00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2029
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3294
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.064 The recommended git tool is: git
00:00:00.064 using credential 00000000-0000-0000-0000-000000000002
00:00:00.065 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.094 Fetching changes from the remote Git repository
00:00:00.096 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.132 Using shallow fetch with depth 1
00:00:00.132 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.132 > git --version # timeout=10
00:00:00.162 > git --version # 'git version 2.39.2'
00:00:00.162 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.184 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.184 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.021 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.032 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.043 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:05.043 > git config core.sparsecheckout # timeout=10
00:00:05.053 > git read-tree -mu HEAD # timeout=10
00:00:05.068 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:05.100 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:05.100 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:05.184 [Pipeline] Start of Pipeline
00:00:05.196 [Pipeline] library
00:00:05.197 Loading library shm_lib@master
00:00:05.197 Library shm_lib@master is cached. Copying from home.
00:00:05.210 [Pipeline] node
00:00:05.218 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.219 [Pipeline] {
00:00:05.228 [Pipeline] catchError
00:00:05.229 [Pipeline] {
00:00:05.237 [Pipeline] wrap
00:00:05.244 [Pipeline] {
00:00:05.249 [Pipeline] stage
00:00:05.250 [Pipeline] { (Prologue)
00:00:05.423 [Pipeline] sh
00:00:05.701 + logger -p user.info -t JENKINS-CI
00:00:05.715 [Pipeline] echo
00:00:05.716 Node: GP11
00:00:05.722 [Pipeline] sh
00:00:06.010 [Pipeline] setCustomBuildProperty
00:00:06.020 [Pipeline] echo
00:00:06.022 Cleanup processes
00:00:06.026 [Pipeline] sh
00:00:06.304 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.304 1387399 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.316 [Pipeline] sh
00:00:06.596 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.596 ++ grep -v 'sudo pgrep'
00:00:06.596 ++ awk '{print $1}'
00:00:06.596 + sudo kill -9
00:00:06.596 + true
00:00:06.610 [Pipeline] cleanWs
00:00:06.619 [WS-CLEANUP] Deleting project workspace...
00:00:06.619 [WS-CLEANUP] Deferred wipeout is used...
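The "Cleanup processes" step traced above is a reusable pattern: list every process whose command line mentions the workspace, drop the `pgrep` invocation itself from the hit list, and `kill -9` whatever remains, treating an empty list as success (that is what the trailing `+ true` is for). A minimal standalone sketch of that pattern — the function name and structure are mine, not from the job, and `sudo` is dropped so it can run unprivileged:

```shell
#!/usr/bin/env bash
# Kill anything still running out of a given workspace directory.
# An empty match list is not an error, mirroring the `+ true` that
# follows `sudo kill -9` in the Jenkins trace.
kill_stale_workspace_procs() {
  local workspace=$1
  local pids
  # -a: print full command line; -f: match against it (not just the name)
  pids=$(pgrep -af "$workspace" | grep -v 'pgrep' | awk '{print $1}')
  if [ -n "$pids" ]; then
    # shellcheck disable=SC2086  # word-splitting of the PID list is intended
    kill -9 $pids || true
  fi
}
```

Filtering out the `pgrep` line matters because `pgrep -f` matches its own pattern when it appears in another process's arguments, so without the `grep -v` the step could kill the shell driving it.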
00:00:06.624 [WS-CLEANUP] done
00:00:06.628 [Pipeline] setCustomBuildProperty
00:00:06.641 [Pipeline] sh
00:00:06.921 + sudo git config --global --replace-all safe.directory '*'
00:00:06.990 [Pipeline] httpRequest
00:00:07.019 [Pipeline] echo
00:00:07.020 Sorcerer 10.211.164.101 is alive
00:00:07.027 [Pipeline] httpRequest
00:00:07.031 HttpMethod: GET
00:00:07.031 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.032 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.041 Response Code: HTTP/1.1 200 OK
00:00:07.041 Success: Status code 200 is in the accepted range: 200,404
00:00:07.042 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:11.529 [Pipeline] sh
00:00:11.810 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:11.825 [Pipeline] httpRequest
00:00:11.842 [Pipeline] echo
00:00:11.844 Sorcerer 10.211.164.101 is alive
00:00:11.853 [Pipeline] httpRequest
00:00:11.858 HttpMethod: GET
00:00:11.859 URL: http://10.211.164.101/packages/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:00:11.859 Sending request to url: http://10.211.164.101/packages/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:00:11.873 Response Code: HTTP/1.1 200 OK
00:00:11.873 Success: Status code 200 is in the accepted range: 200,404
00:00:11.874 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:01:32.641 [Pipeline] sh
00:01:32.924 + tar --no-same-owner -xf spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz
00:01:37.117 [Pipeline] sh
00:01:37.399 + git -C spdk log --oneline -n5
00:01:37.399 d005e023b raid: fix empty slot not updated in sb after resize
00:01:37.399 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:01:37.399 8ee2672c4 test/bdev: Add test for resized RAID with superblock
00:01:37.399 19f5787c8 raid: skip configured base bdevs in sb examine
00:01:37.399 3b9baa5f8 bdev/raid1: Support resize when increasing the size of base bdevs
00:01:37.411 [Pipeline] withCredentials
00:01:37.421 > git --version # timeout=10
00:01:37.432 > git --version # 'git version 2.39.2'
00:01:37.447 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:37.449 [Pipeline] {
00:01:37.455 [Pipeline] retry
00:01:37.457 [Pipeline] {
00:01:37.472 [Pipeline] sh
00:01:37.748 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:40.295 [Pipeline] }
00:01:40.317 [Pipeline] // retry
00:01:40.322 [Pipeline] }
00:01:40.343 [Pipeline] // withCredentials
00:01:40.352 [Pipeline] httpRequest
00:01:40.366 [Pipeline] echo
00:01:40.367 Sorcerer 10.211.164.101 is alive
00:01:40.375 [Pipeline] httpRequest
00:01:40.379 HttpMethod: GET
00:01:40.379 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:40.380 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:40.381 Response Code: HTTP/1.1 200 OK
00:01:40.382 Success: Status code 200 is in the accepted range: 200,404
00:01:40.382 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:45.189 [Pipeline] sh
00:01:45.470 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:47.380 [Pipeline] sh
00:01:47.658 + git -C dpdk log --oneline -n5
00:01:47.659 caf0f5d395 version: 22.11.4
00:01:47.659 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:47.659 dc9c799c7d vhost: fix missing spinlock unlock
00:01:47.659 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:47.659 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:47.670 [Pipeline] }
00:01:47.686 [Pipeline] // stage
00:01:47.695 [Pipeline] stage
00:01:47.697 [Pipeline] { (Prepare)
00:01:47.718 [Pipeline] writeFile
00:01:47.733 [Pipeline] sh
00:01:48.011 + logger -p user.info -t JENKINS-CI
00:01:48.023 [Pipeline] sh
00:01:48.301 + logger -p user.info -t JENKINS-CI
00:01:48.312 [Pipeline] sh
00:01:48.590 + cat autorun-spdk.conf
00:01:48.590 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.590 SPDK_TEST_NVMF=1
00:01:48.590 SPDK_TEST_NVME_CLI=1
00:01:48.590 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:48.590 SPDK_TEST_NVMF_NICS=e810
00:01:48.590 SPDK_TEST_VFIOUSER=1
00:01:48.590 SPDK_RUN_UBSAN=1
00:01:48.590 NET_TYPE=phy
00:01:48.590 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:48.590 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:48.596 RUN_NIGHTLY=1
00:01:48.601 [Pipeline] readFile
00:01:48.624 [Pipeline] withEnv
00:01:48.626 [Pipeline] {
00:01:48.639 [Pipeline] sh
00:01:48.915 + set -ex
00:01:48.915 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:48.915 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:48.915 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.915 ++ SPDK_TEST_NVMF=1
00:01:48.915 ++ SPDK_TEST_NVME_CLI=1
00:01:48.915 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:48.915 ++ SPDK_TEST_NVMF_NICS=e810
00:01:48.915 ++ SPDK_TEST_VFIOUSER=1
00:01:48.915 ++ SPDK_RUN_UBSAN=1
00:01:48.915 ++ NET_TYPE=phy
00:01:48.915 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:48.915 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:48.915 ++ RUN_NIGHTLY=1
00:01:48.915 + case $SPDK_TEST_NVMF_NICS in
00:01:48.915 + DRIVERS=ice
00:01:48.915 + [[ tcp == \r\d\m\a ]]
00:01:48.915 + [[ -n ice ]]
00:01:48.915 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:48.915 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:48.915 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:48.915 rmmod: ERROR: Module irdma is not currently loaded
00:01:48.915 rmmod: ERROR: Module i40iw is not currently loaded
00:01:48.915 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:48.915 + true
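The Prepare stage above is driven entirely by `autorun-spdk.conf`: the job writes a flat KEY=value file, `cat`s it into the log for the record, then `source`s it under `set -ex` and branches on the values — here `SPDK_TEST_NVMF_NICS=e810` selects the `ice` driver. A self-contained sketch of that flow, with the file written to a temp path (mine) and only the two keys the branch needs:

```shell
#!/usr/bin/env bash
# Sketch of how the job consumes autorun-spdk.conf: plain KEY=value lines
# that are simply source'd, after which the script branches on the values.
# Keys and the e810 -> ice mapping come from the log; the temp file is mine.
set -e
conf=$(mktemp)
cat > "$conf" <<'EOF'
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_NVMF_NICS=e810
EOF
# shellcheck source=/dev/null
source "$conf"
case "$SPDK_TEST_NVMF_NICS" in
  e810) DRIVERS=ice ;;   # Intel E810 NICs are driven by the ice module
  *)    DRIVERS='' ;;
esac
rm -f "$conf"
echo "transport=$SPDK_TEST_NVMF_TRANSPORT drivers=$DRIVERS"
```

The real job then unloads competing RDMA modules with `rmmod` (tolerating "not currently loaded" errors) and `modprobe`s each entry in `$DRIVERS`, as the trace that follows shows.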
00:01:48.915 + for D in $DRIVERS
00:01:48.915 + sudo modprobe ice
00:01:48.915 + exit 0
00:01:48.924 [Pipeline] }
00:01:48.941 [Pipeline] // withEnv
00:01:48.946 [Pipeline] }
00:01:48.961 [Pipeline] // stage
00:01:48.970 [Pipeline] catchError
00:01:48.972 [Pipeline] {
00:01:48.986 [Pipeline] timeout
00:01:48.986 Timeout set to expire in 50 min
00:01:48.988 [Pipeline] {
00:01:49.003 [Pipeline] stage
00:01:49.005 [Pipeline] { (Tests)
00:01:49.020 [Pipeline] sh
00:01:49.340 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:49.340 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:49.340 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:49.340 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:49.340 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:49.340 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:49.340 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:49.340 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:49.340 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:49.340 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:49.340 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:49.340 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:49.340 + source /etc/os-release
00:01:49.340 ++ NAME='Fedora Linux'
00:01:49.340 ++ VERSION='38 (Cloud Edition)'
00:01:49.340 ++ ID=fedora
00:01:49.340 ++ VERSION_ID=38
00:01:49.340 ++ VERSION_CODENAME=
00:01:49.340 ++ PLATFORM_ID=platform:f38
00:01:49.340 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:49.340 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:49.340 ++ LOGO=fedora-logo-icon
00:01:49.340 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:49.340 ++ HOME_URL=https://fedoraproject.org/
00:01:49.340 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:49.340 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:49.340 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:49.340 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:49.340 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:49.340 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:49.340 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:49.340 ++ SUPPORT_END=2024-05-14
00:01:49.340 ++ VARIANT='Cloud Edition'
00:01:49.340 ++ VARIANT_ID=cloud
00:01:49.340 + uname -a
00:01:49.340 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:49.340 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:50.273 Hugepages
00:01:50.273 node hugesize free / total
00:01:50.273 node0 1048576kB 0 / 0
00:01:50.273 node0 2048kB 0 / 0
00:01:50.273 node1 1048576kB 0 / 0
00:01:50.273 node1 2048kB 0 / 0
00:01:50.273
00:01:50.273 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:50.273 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:50.273 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:50.273 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:50.273 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:50.273 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:50.273 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:50.273 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:50.273 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:50.273 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:50.273 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:50.273 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:50.273 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:50.273 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:50.273 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:50.273 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:50.273 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:50.273 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:50.530 + rm -f /tmp/spdk-ld-path
00:01:50.530 + source autorun-spdk.conf
00:01:50.530 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:50.530 ++ SPDK_TEST_NVMF=1
00:01:50.530 ++ SPDK_TEST_NVME_CLI=1
00:01:50.530 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:50.530 ++ SPDK_TEST_NVMF_NICS=e810
00:01:50.530 ++ SPDK_TEST_VFIOUSER=1
00:01:50.530 ++ SPDK_RUN_UBSAN=1
00:01:50.530 ++ NET_TYPE=phy
00:01:50.530 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:50.530 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:50.530 ++ RUN_NIGHTLY=1
00:01:50.530 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:50.530 + [[ -n '' ]]
00:01:50.530 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:50.530 + for M in /var/spdk/build-*-manifest.txt
00:01:50.530 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:50.530 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:50.530 + for M in /var/spdk/build-*-manifest.txt
00:01:50.530 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:50.530 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:50.530 ++ uname
00:01:50.530 + [[ Linux == \L\i\n\u\x ]]
00:01:50.530 + sudo dmesg -T
00:01:50.530 + sudo dmesg --clear
00:01:50.530 + dmesg_pid=1388746
00:01:50.530 + [[ Fedora Linux == FreeBSD ]]
00:01:50.530 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:50.530 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:50.530 + sudo dmesg -Tw
00:01:50.530 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:50.530 + [[ -x /usr/src/fio-static/fio ]]
00:01:50.530 + export FIO_BIN=/usr/src/fio-static/fio
00:01:50.530 + FIO_BIN=/usr/src/fio-static/fio
00:01:50.530 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:50.530 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:50.530 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:50.530 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:50.530 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:50.530 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:50.530 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:50.530 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:50.530 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:50.530 Test configuration:
00:01:50.530 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:50.530 SPDK_TEST_NVMF=1
00:01:50.530 SPDK_TEST_NVME_CLI=1
00:01:50.530 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:50.530 SPDK_TEST_NVMF_NICS=e810
00:01:50.530 SPDK_TEST_VFIOUSER=1
00:01:50.530 SPDK_RUN_UBSAN=1
00:01:50.530 NET_TYPE=phy
00:01:50.530 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:50.530 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:50.530 RUN_NIGHTLY=1
05:21:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:50.530 05:21:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:50.530 05:21:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:50.530 05:21:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:50.531 05:21:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:50.531 05:21:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:50.531 05:21:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:50.531 05:21:44 -- paths/export.sh@5 -- $ export PATH
00:01:50.531 05:21:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:50.531 05:21:44 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:50.531 05:21:44 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:50.531 05:21:44 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721877704.XXXXXX
00:01:50.531 05:21:44 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721877704.2sb9Qi
00:01:50.531 05:21:44 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:50.531 05:21:44 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']'
00:01:50.531 05:21:44 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:50.531 05:21:44 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:01:50.531 05:21:44 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:50.531 05:21:44 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:50.531 05:21:44 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:50.531 05:21:44 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:50.531 05:21:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.531 05:21:44 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:01:50.531 05:21:44 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:50.531 05:21:44 -- pm/common@17 -- $ local monitor
00:01:50.531 05:21:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:50.531 05:21:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:50.531 05:21:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:50.531 05:21:44 -- pm/common@21 -- $ date +%s
00:01:50.531 05:21:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:50.531 05:21:44 -- pm/common@21 -- $ date +%s
00:01:50.531 05:21:44 -- pm/common@25 -- $ sleep 1
00:01:50.531 05:21:44 -- pm/common@21 -- $ date +%s
00:01:50.531 05:21:44 -- pm/common@21 -- $ date +%s
00:01:50.531 05:21:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721877704
00:01:50.531 05:21:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721877704
00:01:50.531 05:21:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721877704
00:01:50.531 05:21:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721877704
00:01:50.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721877704_collect-vmstat.pm.log
00:01:50.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721877704_collect-cpu-load.pm.log
00:01:50.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721877704_collect-cpu-temp.pm.log
00:01:50.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721877704_collect-bmc-pm.bmc.pm.log
00:01:51.464 05:21:45 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:51.464 05:21:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:51.464 05:21:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:51.464 05:21:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:51.464 05:21:45 -- spdk/autobuild.sh@16 -- $ date -u
00:01:51.464 Thu Jul 25 03:21:45 AM UTC 2024
00:01:51.464 05:21:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:51.464 v24.09-pre-318-gd005e023b
00:01:51.464 05:21:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:51.464 05:21:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:51.464 05:21:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:51.464 05:21:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:51.464 05:21:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:51.464 05:21:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:51.722 ************************************
00:01:51.722 START TEST ubsan
00:01:51.722 ************************************
00:01:51.722 05:21:45 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:51.722 using ubsan
00:01:51.722
00:01:51.722 real 0m0.000s
00:01:51.722 user 0m0.000s
00:01:51.722 sys 0m0.000s
00:01:51.722 05:21:45 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:51.722 05:21:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:51.722 ************************************
00:01:51.722 END TEST ubsan
00:01:51.722 ************************************
00:01:51.722 05:21:45 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:51.722 05:21:45 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:51.722 05:21:45 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:51.722 05:21:45 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:51.722 05:21:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:51.722 05:21:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:51.722 ************************************
00:01:51.722 START TEST build_native_dpdk
00:01:51.722 ************************************
00:01:51.722 05:21:45 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:51.722 caf0f5d395 version: 22.11.4
00:01:51.722 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:51.722 dc9c799c7d vhost: fix missing spinlock unlock
00:01:51.722 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:51.722 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:51.722 05:21:45 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:01:51.723 05:21:45 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:51.723 patching file config/rte_config.h
00:01:51.723 Hunk #1 succeeded at 60 (offset 1 line).
00:01:51.723 05:21:45 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:51.723 05:21:45 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:51.723 05:21:45 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:51.723 patching file lib/pcapng/rte_pcapng.c 00:01:51.723 Hunk #1 succeeded at 110 (offset -18 lines). 
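The xtrace above steps field by field through a dotted-version comparison: both versions are split on `.-:` into arrays, then corresponding numeric fields are compared until one differs (`22 < 24` decides `22.11.4 < 24.07.0`). A minimal standalone sketch of that logic follows; it is a hypothetical POSIX-sh reimplementation for illustration, not SPDK's actual `scripts/common.sh`:

```shell
#!/bin/sh
# version_lt A B: exit 0 (true) if version A sorts before version B,
# comparing up to three numeric fields split on '.', '-', or ':'.
# Illustrative sketch only -- not the real scripts/common.sh cmp_versions.
version_lt() {
    v1=$1 v2=$2
    old_ifs=$IFS
    IFS='.-:'
    set -- $v1; a1=${1:-0} a2=${2:-0} a3=${3:-0}
    set -- $v2; b1=${1:-0} b2=${2:-0} b3=${3:-0}
    IFS=$old_ifs
    for pair in "$a1 $b1" "$a2 $b2" "$a3 $b3"; do
        set -- $pair
        [ "$1" -gt "$2" ] && return 1   # this field greater: A is not less than B
        [ "$1" -lt "$2" ] && return 0   # this field smaller: A is less than B
    done
    return 1                            # all fields equal: not strictly less
}

version_lt 22.11.4 24.07.0 && echo "22.11.4 < 24.07.0"
```

This is why the first `patch -p1` guard (comparing against `21`) returned 1 while the second (comparing `22.11.4` against `24.07.0`) returned 0 and applied the `rte_pcapng.c` patch.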
00:01:51.723 05:21:45 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:51.723 05:21:45 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:51.723 05:21:45 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:51.723 05:21:45 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:51.723 05:21:45 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:55.908 The Meson build system 00:01:55.908 Version: 1.3.1 00:01:55.908 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:55.908 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:55.908 Build type: native build 00:01:55.908 Program cat found: YES (/usr/bin/cat) 00:01:55.908 Project name: DPDK 00:01:55.908 Project version: 22.11.4 00:01:55.908 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:55.908 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:55.908 Host machine cpu family: x86_64 00:01:55.908 Host machine cpu: x86_64 00:01:55.908 Message: ## Building in Developer Mode ## 00:01:55.908 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:55.908 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:55.908 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:55.908 Program objdump found: YES (/usr/bin/objdump) 00:01:55.908 Program python3 found: YES (/usr/bin/python3) 00:01:55.908 
Program cat found: YES (/usr/bin/cat) 00:01:55.908 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:55.908 Checking for size of "void *" : 8 00:01:55.908 Checking for size of "void *" : 8 (cached) 00:01:55.908 Library m found: YES 00:01:55.908 Library numa found: YES 00:01:55.908 Has header "numaif.h" : YES 00:01:55.908 Library fdt found: NO 00:01:55.908 Library execinfo found: NO 00:01:55.908 Has header "execinfo.h" : YES 00:01:55.908 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:55.908 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:55.908 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:55.908 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:55.908 Run-time dependency openssl found: YES 3.0.9 00:01:55.908 Run-time dependency libpcap found: YES 1.10.4 00:01:55.908 Has header "pcap.h" with dependency libpcap: YES 00:01:55.908 Compiler for C supports arguments -Wcast-qual: YES 00:01:55.908 Compiler for C supports arguments -Wdeprecated: YES 00:01:55.908 Compiler for C supports arguments -Wformat: YES 00:01:55.908 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:55.908 Compiler for C supports arguments -Wformat-security: NO 00:01:55.908 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:55.908 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:55.908 Compiler for C supports arguments -Wnested-externs: YES 00:01:55.908 Compiler for C supports arguments -Wold-style-definition: YES 00:01:55.908 Compiler for C supports arguments -Wpointer-arith: YES 00:01:55.908 Compiler for C supports arguments -Wsign-compare: YES 00:01:55.908 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:55.908 Compiler for C supports arguments -Wundef: YES 00:01:55.908 Compiler for C supports arguments -Wwrite-strings: YES 00:01:55.908 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:55.908 
Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:55.908 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:55.908 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:55.908 Compiler for C supports arguments -mavx512f: YES 00:01:55.908 Checking if "AVX512 checking" compiles: YES 00:01:55.908 Fetching value of define "__SSE4_2__" : 1 00:01:55.908 Fetching value of define "__AES__" : 1 00:01:55.908 Fetching value of define "__AVX__" : 1 00:01:55.908 Fetching value of define "__AVX2__" : (undefined) 00:01:55.908 Fetching value of define "__AVX512BW__" : (undefined) 00:01:55.908 Fetching value of define "__AVX512CD__" : (undefined) 00:01:55.908 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:55.908 Fetching value of define "__AVX512F__" : (undefined) 00:01:55.908 Fetching value of define "__AVX512VL__" : (undefined) 00:01:55.908 Fetching value of define "__PCLMUL__" : 1 00:01:55.908 Fetching value of define "__RDRND__" : 1 00:01:55.908 Fetching value of define "__RDSEED__" : (undefined) 00:01:55.908 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:55.908 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:55.908 Message: lib/kvargs: Defining dependency "kvargs" 00:01:55.908 Message: lib/telemetry: Defining dependency "telemetry" 00:01:55.908 Checking for function "getentropy" : YES 00:01:55.908 Message: lib/eal: Defining dependency "eal" 00:01:55.908 Message: lib/ring: Defining dependency "ring" 00:01:55.908 Message: lib/rcu: Defining dependency "rcu" 00:01:55.908 Message: lib/mempool: Defining dependency "mempool" 00:01:55.908 Message: lib/mbuf: Defining dependency "mbuf" 00:01:55.908 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:55.908 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:55.908 Compiler for C supports arguments -mpclmul: YES 00:01:55.908 Compiler for C supports arguments -maes: YES 00:01:55.908 Compiler for C supports 
arguments -mavx512f: YES (cached) 00:01:55.908 Compiler for C supports arguments -mavx512bw: YES 00:01:55.908 Compiler for C supports arguments -mavx512dq: YES 00:01:55.908 Compiler for C supports arguments -mavx512vl: YES 00:01:55.908 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:55.908 Compiler for C supports arguments -mavx2: YES 00:01:55.908 Compiler for C supports arguments -mavx: YES 00:01:55.908 Message: lib/net: Defining dependency "net" 00:01:55.908 Message: lib/meter: Defining dependency "meter" 00:01:55.908 Message: lib/ethdev: Defining dependency "ethdev" 00:01:55.908 Message: lib/pci: Defining dependency "pci" 00:01:55.908 Message: lib/cmdline: Defining dependency "cmdline" 00:01:55.908 Message: lib/metrics: Defining dependency "metrics" 00:01:55.908 Message: lib/hash: Defining dependency "hash" 00:01:55.908 Message: lib/timer: Defining dependency "timer" 00:01:55.908 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:55.908 Compiler for C supports arguments -mavx2: YES (cached) 00:01:55.908 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:55.908 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:55.909 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:55.909 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:55.909 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:55.909 Message: lib/acl: Defining dependency "acl" 00:01:55.909 Message: lib/bbdev: Defining dependency "bbdev" 00:01:55.909 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:55.909 Run-time dependency libelf found: YES 0.190 00:01:55.909 Message: lib/bpf: Defining dependency "bpf" 00:01:55.909 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:55.909 Message: lib/compressdev: Defining dependency "compressdev" 00:01:55.909 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:55.909 Message: lib/distributor: Defining 
dependency "distributor" 00:01:55.909 Message: lib/efd: Defining dependency "efd" 00:01:55.909 Message: lib/eventdev: Defining dependency "eventdev" 00:01:55.909 Message: lib/gpudev: Defining dependency "gpudev" 00:01:55.909 Message: lib/gro: Defining dependency "gro" 00:01:55.909 Message: lib/gso: Defining dependency "gso" 00:01:55.909 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:55.909 Message: lib/jobstats: Defining dependency "jobstats" 00:01:55.909 Message: lib/latencystats: Defining dependency "latencystats" 00:01:55.909 Message: lib/lpm: Defining dependency "lpm" 00:01:55.909 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:55.909 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:55.909 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:55.909 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:55.909 Message: lib/member: Defining dependency "member" 00:01:55.909 Message: lib/pcapng: Defining dependency "pcapng" 00:01:55.909 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:55.909 Message: lib/power: Defining dependency "power" 00:01:55.909 Message: lib/rawdev: Defining dependency "rawdev" 00:01:55.909 Message: lib/regexdev: Defining dependency "regexdev" 00:01:55.909 Message: lib/dmadev: Defining dependency "dmadev" 00:01:55.909 Message: lib/rib: Defining dependency "rib" 00:01:55.909 Message: lib/reorder: Defining dependency "reorder" 00:01:55.909 Message: lib/sched: Defining dependency "sched" 00:01:55.909 Message: lib/security: Defining dependency "security" 00:01:55.909 Message: lib/stack: Defining dependency "stack" 00:01:55.909 Has header "linux/userfaultfd.h" : YES 00:01:55.909 Message: lib/vhost: Defining dependency "vhost" 00:01:55.909 Message: lib/ipsec: Defining dependency "ipsec" 00:01:55.909 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:55.909 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:55.909 
Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:55.909 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:55.909 Message: lib/fib: Defining dependency "fib" 00:01:55.909 Message: lib/port: Defining dependency "port" 00:01:55.909 Message: lib/pdump: Defining dependency "pdump" 00:01:55.909 Message: lib/table: Defining dependency "table" 00:01:55.909 Message: lib/pipeline: Defining dependency "pipeline" 00:01:55.909 Message: lib/graph: Defining dependency "graph" 00:01:55.909 Message: lib/node: Defining dependency "node" 00:01:55.909 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:55.909 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:55.909 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:55.909 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:55.909 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:55.909 Compiler for C supports arguments -Wno-unused-value: YES 00:01:57.290 Compiler for C supports arguments -Wno-format: YES 00:01:57.290 Compiler for C supports arguments -Wno-format-security: YES 00:01:57.290 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:57.290 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:57.290 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:57.290 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:57.290 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:57.290 Compiler for C supports arguments -mavx2: YES (cached) 00:01:57.290 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:57.290 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.290 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:57.290 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:57.290 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:57.290 Program doxygen found: YES 
(/usr/bin/doxygen) 00:01:57.290 Configuring doxy-api.conf using configuration 00:01:57.290 Program sphinx-build found: NO 00:01:57.290 Configuring rte_build_config.h using configuration 00:01:57.290 Message: 00:01:57.290 ================= 00:01:57.290 Applications Enabled 00:01:57.290 ================= 00:01:57.290 00:01:57.290 apps: 00:01:57.290 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:57.290 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:57.290 test-security-perf, 00:01:57.290 00:01:57.290 Message: 00:01:57.290 ================= 00:01:57.290 Libraries Enabled 00:01:57.290 ================= 00:01:57.290 00:01:57.290 libs: 00:01:57.290 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:57.290 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:57.290 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:57.290 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:57.290 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:57.290 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:57.290 table, pipeline, graph, node, 00:01:57.290 00:01:57.290 Message: 00:01:57.290 =============== 00:01:57.290 Drivers Enabled 00:01:57.290 =============== 00:01:57.290 00:01:57.290 common: 00:01:57.290 00:01:57.290 bus: 00:01:57.290 pci, vdev, 00:01:57.290 mempool: 00:01:57.290 ring, 00:01:57.290 dma: 00:01:57.290 00:01:57.290 net: 00:01:57.290 i40e, 00:01:57.290 raw: 00:01:57.290 00:01:57.290 crypto: 00:01:57.290 00:01:57.290 compress: 00:01:57.290 00:01:57.290 regex: 00:01:57.290 00:01:57.290 vdpa: 00:01:57.290 00:01:57.290 event: 00:01:57.290 00:01:57.290 baseband: 00:01:57.291 00:01:57.291 gpu: 00:01:57.291 00:01:57.291 00:01:57.291 Message: 00:01:57.291 ================= 00:01:57.291 Content Skipped 00:01:57.291 ================= 00:01:57.291 00:01:57.291 apps: 
00:01:57.291 00:01:57.291 libs: 00:01:57.291 kni: explicitly disabled via build config (deprecated lib) 00:01:57.291 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:57.291 00:01:57.291 drivers: 00:01:57.291 common/cpt: not in enabled drivers build config 00:01:57.291 common/dpaax: not in enabled drivers build config 00:01:57.291 common/iavf: not in enabled drivers build config 00:01:57.291 common/idpf: not in enabled drivers build config 00:01:57.291 common/mvep: not in enabled drivers build config 00:01:57.291 common/octeontx: not in enabled drivers build config 00:01:57.291 bus/auxiliary: not in enabled drivers build config 00:01:57.291 bus/dpaa: not in enabled drivers build config 00:01:57.291 bus/fslmc: not in enabled drivers build config 00:01:57.291 bus/ifpga: not in enabled drivers build config 00:01:57.291 bus/vmbus: not in enabled drivers build config 00:01:57.291 common/cnxk: not in enabled drivers build config 00:01:57.291 common/mlx5: not in enabled drivers build config 00:01:57.291 common/qat: not in enabled drivers build config 00:01:57.291 common/sfc_efx: not in enabled drivers build config 00:01:57.291 mempool/bucket: not in enabled drivers build config 00:01:57.291 mempool/cnxk: not in enabled drivers build config 00:01:57.291 mempool/dpaa: not in enabled drivers build config 00:01:57.291 mempool/dpaa2: not in enabled drivers build config 00:01:57.291 mempool/octeontx: not in enabled drivers build config 00:01:57.291 mempool/stack: not in enabled drivers build config 00:01:57.291 dma/cnxk: not in enabled drivers build config 00:01:57.291 dma/dpaa: not in enabled drivers build config 00:01:57.291 dma/dpaa2: not in enabled drivers build config 00:01:57.291 dma/hisilicon: not in enabled drivers build config 00:01:57.291 dma/idxd: not in enabled drivers build config 00:01:57.291 dma/ioat: not in enabled drivers build config 00:01:57.291 dma/skeleton: not in enabled drivers build config 00:01:57.291 net/af_packet: not in 
enabled drivers build config 00:01:57.291 net/af_xdp: not in enabled drivers build config 00:01:57.291 net/ark: not in enabled drivers build config 00:01:57.291 net/atlantic: not in enabled drivers build config 00:01:57.291 net/avp: not in enabled drivers build config 00:01:57.291 net/axgbe: not in enabled drivers build config 00:01:57.291 net/bnx2x: not in enabled drivers build config 00:01:57.291 net/bnxt: not in enabled drivers build config 00:01:57.291 net/bonding: not in enabled drivers build config 00:01:57.291 net/cnxk: not in enabled drivers build config 00:01:57.291 net/cxgbe: not in enabled drivers build config 00:01:57.291 net/dpaa: not in enabled drivers build config 00:01:57.291 net/dpaa2: not in enabled drivers build config 00:01:57.291 net/e1000: not in enabled drivers build config 00:01:57.291 net/ena: not in enabled drivers build config 00:01:57.291 net/enetc: not in enabled drivers build config 00:01:57.291 net/enetfec: not in enabled drivers build config 00:01:57.291 net/enic: not in enabled drivers build config 00:01:57.291 net/failsafe: not in enabled drivers build config 00:01:57.291 net/fm10k: not in enabled drivers build config 00:01:57.291 net/gve: not in enabled drivers build config 00:01:57.291 net/hinic: not in enabled drivers build config 00:01:57.291 net/hns3: not in enabled drivers build config 00:01:57.291 net/iavf: not in enabled drivers build config 00:01:57.291 net/ice: not in enabled drivers build config 00:01:57.291 net/idpf: not in enabled drivers build config 00:01:57.291 net/igc: not in enabled drivers build config 00:01:57.291 net/ionic: not in enabled drivers build config 00:01:57.291 net/ipn3ke: not in enabled drivers build config 00:01:57.291 net/ixgbe: not in enabled drivers build config 00:01:57.291 net/kni: not in enabled drivers build config 00:01:57.291 net/liquidio: not in enabled drivers build config 00:01:57.291 net/mana: not in enabled drivers build config 00:01:57.291 net/memif: not in enabled drivers build 
config 00:01:57.291 net/mlx4: not in enabled drivers build config 00:01:57.291 net/mlx5: not in enabled drivers build config 00:01:57.291 net/mvneta: not in enabled drivers build config 00:01:57.291 net/mvpp2: not in enabled drivers build config 00:01:57.291 net/netvsc: not in enabled drivers build config 00:01:57.291 net/nfb: not in enabled drivers build config 00:01:57.291 net/nfp: not in enabled drivers build config 00:01:57.291 net/ngbe: not in enabled drivers build config 00:01:57.291 net/null: not in enabled drivers build config 00:01:57.291 net/octeontx: not in enabled drivers build config 00:01:57.291 net/octeon_ep: not in enabled drivers build config 00:01:57.291 net/pcap: not in enabled drivers build config 00:01:57.291 net/pfe: not in enabled drivers build config 00:01:57.291 net/qede: not in enabled drivers build config 00:01:57.291 net/ring: not in enabled drivers build config 00:01:57.291 net/sfc: not in enabled drivers build config 00:01:57.291 net/softnic: not in enabled drivers build config 00:01:57.291 net/tap: not in enabled drivers build config 00:01:57.291 net/thunderx: not in enabled drivers build config 00:01:57.291 net/txgbe: not in enabled drivers build config 00:01:57.291 net/vdev_netvsc: not in enabled drivers build config 00:01:57.291 net/vhost: not in enabled drivers build config 00:01:57.291 net/virtio: not in enabled drivers build config 00:01:57.291 net/vmxnet3: not in enabled drivers build config 00:01:57.291 raw/cnxk_bphy: not in enabled drivers build config 00:01:57.291 raw/cnxk_gpio: not in enabled drivers build config 00:01:57.291 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:57.291 raw/ifpga: not in enabled drivers build config 00:01:57.291 raw/ntb: not in enabled drivers build config 00:01:57.291 raw/skeleton: not in enabled drivers build config 00:01:57.291 crypto/armv8: not in enabled drivers build config 00:01:57.291 crypto/bcmfs: not in enabled drivers build config 00:01:57.291 crypto/caam_jr: not in enabled 
drivers build config 00:01:57.291 crypto/ccp: not in enabled drivers build config 00:01:57.291 crypto/cnxk: not in enabled drivers build config 00:01:57.291 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.291 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.291 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.291 crypto/mlx5: not in enabled drivers build config 00:01:57.291 crypto/mvsam: not in enabled drivers build config 00:01:57.291 crypto/nitrox: not in enabled drivers build config 00:01:57.291 crypto/null: not in enabled drivers build config 00:01:57.291 crypto/octeontx: not in enabled drivers build config 00:01:57.291 crypto/openssl: not in enabled drivers build config 00:01:57.291 crypto/scheduler: not in enabled drivers build config 00:01:57.291 crypto/uadk: not in enabled drivers build config 00:01:57.291 crypto/virtio: not in enabled drivers build config 00:01:57.291 compress/isal: not in enabled drivers build config 00:01:57.291 compress/mlx5: not in enabled drivers build config 00:01:57.291 compress/octeontx: not in enabled drivers build config 00:01:57.291 compress/zlib: not in enabled drivers build config 00:01:57.291 regex/mlx5: not in enabled drivers build config 00:01:57.291 regex/cn9k: not in enabled drivers build config 00:01:57.291 vdpa/ifc: not in enabled drivers build config 00:01:57.291 vdpa/mlx5: not in enabled drivers build config 00:01:57.291 vdpa/sfc: not in enabled drivers build config 00:01:57.291 event/cnxk: not in enabled drivers build config 00:01:57.291 event/dlb2: not in enabled drivers build config 00:01:57.291 event/dpaa: not in enabled drivers build config 00:01:57.291 event/dpaa2: not in enabled drivers build config 00:01:57.291 event/dsw: not in enabled drivers build config 00:01:57.291 event/opdl: not in enabled drivers build config 00:01:57.291 event/skeleton: not in enabled drivers build config 00:01:57.291 event/sw: not in enabled drivers build config 00:01:57.291 event/octeontx: 
not in enabled drivers build config 00:01:57.291 baseband/acc: not in enabled drivers build config 00:01:57.291 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:57.291 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:57.291 baseband/la12xx: not in enabled drivers build config 00:01:57.291 baseband/null: not in enabled drivers build config 00:01:57.291 baseband/turbo_sw: not in enabled drivers build config 00:01:57.291 gpu/cuda: not in enabled drivers build config 00:01:57.291 00:01:57.291 00:01:57.291 Build targets in project: 316 00:01:57.291 00:01:57.291 DPDK 22.11.4 00:01:57.291 00:01:57.291 User defined options 00:01:57.291 libdir : lib 00:01:57.291 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.291 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:57.291 c_link_args : 00:01:57.291 enable_docs : false 00:01:57.291 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:57.291 enable_kmods : false 00:01:57.291 machine : native 00:01:57.291 tests : false 00:01:57.291 00:01:57.291 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.291 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
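Meson's closing warning flags the legacy invocation style (`meson [options]` with an implicit setup), and the earlier `config/meson.build:83` warning flags the deprecated `machine` option. A sketch of the equivalent modern invocation, using the same options the log records but with the explicit `setup` subcommand (whether `-Dmachine` should become `-Dcpu_instruction_set` depends on the DPDK version being built, so this is an assumption, not a tested command):

```shell
# Hypothetical modernized form of the configure step shown above:
# 'meson setup' replaces the deprecated bare 'meson <builddir>' style.
meson setup build-tmp \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
  --libdir lib \
  -Denable_docs=false -Denable_kmods=false -Dtests=false \
  -Dc_link_args= \
  '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Dmachine=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

The build itself is unaffected either way; the warnings only signal options slated for removal in newer Meson/DPDK releases.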
00:01:57.291 05:21:50 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:57.291 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:57.291 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:57.291 [2/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.291 [3/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:57.291 [4/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:57.291 [5/745] Generating lib/rte_telemetry_def with a custom command 00:01:57.291 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.291 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:57.291 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.291 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.291 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.291 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.291 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.292 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:57.292 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:57.292 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.292 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.551 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.551 [18/745] Linking static target lib/librte_kvargs.a 00:01:57.551 [19/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.551 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
00:01:57.551 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:57.551 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.551 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:57.551 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:57.551 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.551 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:57.551 [27/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:57.551 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.551 [29/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:57.551 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:57.551 [31/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.551 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:57.551 [33/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.551 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.551 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:57.551 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:57.551 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:57.551 [38/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:57.551 [39/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:57.551 [40/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:57.551 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.551 [42/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:57.551 [43/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:57.551 [44/745] Generating lib/rte_eal_def with a custom command 00:01:57.551 [45/745] Generating lib/rte_ring_def with a custom command 00:01:57.551 [46/745] Generating lib/rte_eal_mingw with a custom command 00:01:57.551 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:57.551 [48/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:57.551 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:57.551 [50/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:57.551 [51/745] Generating lib/rte_ring_mingw with a custom command 00:01:57.551 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:57.551 [53/745] Generating lib/rte_rcu_def with a custom command 00:01:57.551 [54/745] Generating lib/rte_rcu_mingw with a custom command 00:01:57.551 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:57.551 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:57.551 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:57.551 [58/745] Generating lib/rte_mempool_def with a custom command 00:01:57.551 [59/745] Generating lib/rte_mempool_mingw with a custom command 00:01:57.551 [60/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:57.551 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:57.551 [62/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:57.551 [63/745] Generating lib/rte_mbuf_def with a custom command 00:01:57.551 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:57.551 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:57.813 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.813 [67/745] Generating 
lib/rte_net_mingw with a custom command 00:01:57.813 [68/745] Generating lib/rte_meter_def with a custom command 00:01:57.813 [69/745] Generating lib/rte_net_def with a custom command 00:01:57.813 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:57.813 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:57.813 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:57.813 [73/745] Generating lib/rte_meter_mingw with a custom command 00:01:57.813 [74/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:57.813 [75/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.813 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.813 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:57.813 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:57.813 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.813 [80/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:57.813 [81/745] Linking static target lib/librte_ring.a 00:01:57.813 [82/745] Linking target lib/librte_kvargs.so.23.0 00:01:57.813 [83/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:57.813 [84/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:57.813 [85/745] Generating lib/rte_pci_def with a custom command 00:01:58.073 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.073 [87/745] Linking static target lib/librte_meter.a 00:01:58.073 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.073 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.073 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:58.073 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 
00:01:58.073 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:58.073 [93/745] Linking static target lib/librte_pci.a 00:01:58.073 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:58.073 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.073 [96/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.073 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.073 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:58.335 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.335 [100/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.335 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:58.335 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:58.335 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.335 [104/745] Generating lib/rte_cmdline_def with a custom command 00:01:58.335 [105/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:58.335 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.335 [107/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.335 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.335 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:58.336 [110/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.336 [111/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.336 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:58.336 [113/745] Linking static target lib/librte_telemetry.a 00:01:58.336 [114/745] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:58.336 [115/745] Generating lib/rte_metrics_def with a custom command 00:01:58.336 [116/745] Generating lib/rte_metrics_mingw with a custom command 00:01:58.596 [117/745] Generating lib/rte_hash_def with a custom command 00:01:58.596 [118/745] Generating lib/rte_hash_mingw with a custom command 00:01:58.596 [119/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.596 [120/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.596 [121/745] Generating lib/rte_timer_def with a custom command 00:01:58.596 [122/745] Generating lib/rte_timer_mingw with a custom command 00:01:58.596 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.596 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:58.596 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.856 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.856 [127/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.856 [128/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.856 [129/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.856 [130/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.856 [131/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.856 [132/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.856 [133/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.856 [134/745] Generating lib/rte_acl_def with a custom command 00:01:58.856 [135/745] Generating lib/rte_bbdev_def with a custom command 00:01:58.856 [136/745] Generating lib/rte_acl_mingw with a custom command 00:01:58.856 [137/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.856 [138/745] 
Generating lib/rte_bbdev_mingw with a custom command 00:01:58.856 [139/745] Generating lib/rte_bitratestats_def with a custom command 00:01:58.856 [140/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:58.856 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.856 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:59.122 [143/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.122 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:59.122 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.122 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:59.122 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:59.122 [148/745] Linking target lib/librte_telemetry.so.23.0 00:01:59.122 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:59.122 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:59.122 [151/745] Generating lib/rte_bpf_mingw with a custom command 00:01:59.122 [152/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:59.122 [153/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:59.122 [154/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:59.122 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:59.122 [156/745] Generating lib/rte_cfgfile_def with a custom command 00:01:59.122 [157/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:59.122 [158/745] Generating lib/rte_compressdev_def with a custom command 00:01:59.122 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:59.122 [160/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:59.384 [161/745] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:59.384 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:59.384 [163/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:59.384 [164/745] Generating lib/rte_cryptodev_def with a custom command 00:01:59.384 [165/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:59.384 [166/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:59.384 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:59.384 [168/745] Linking static target lib/librte_timer.a 00:01:59.384 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:59.384 [170/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:59.384 [171/745] Linking static target lib/librte_rcu.a 00:01:59.384 [172/745] Generating lib/rte_distributor_def with a custom command 00:01:59.384 [173/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:59.384 [174/745] Generating lib/rte_distributor_mingw with a custom command 00:01:59.384 [175/745] Linking static target lib/librte_cmdline.a 00:01:59.384 [176/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:59.384 [177/745] Generating lib/rte_efd_def with a custom command 00:01:59.384 [178/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:59.384 [179/745] Generating lib/rte_efd_mingw with a custom command 00:01:59.384 [180/745] Linking static target lib/librte_net.a 00:01:59.384 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:59.643 [182/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:59.643 [183/745] Linking static target lib/librte_cfgfile.a 00:01:59.643 [184/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:59.643 [185/745] Linking static target lib/librte_metrics.a 00:01:59.643 [186/745] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:59.643 [187/745] Linking static target lib/librte_mempool.a 00:01:59.911 [188/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:59.911 [189/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.911 [190/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.911 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.911 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:59.911 [193/745] Generating lib/rte_eventdev_def with a custom command 00:01:59.911 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:59.911 [195/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:59.911 [196/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:59.911 [197/745] Linking static target lib/librte_eal.a 00:02:00.173 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:00.173 [199/745] Generating lib/rte_gpudev_def with a custom command 00:02:00.173 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:00.173 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:00.173 [202/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.173 [203/745] Generating lib/rte_gpudev_mingw with a custom command 00:02:00.173 [204/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:00.173 [205/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:00.173 [206/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:00.173 [207/745] Linking static target lib/librte_bitratestats.a 00:02:00.173 [208/745] Generating lib/rte_gro_def with a custom command 00:02:00.173 [209/745] Generating lib/rte_gro_mingw with a custom command 00:02:00.173 
[210/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.436 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:00.436 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:00.436 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:00.436 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:00.436 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:00.436 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.436 [217/745] Generating lib/rte_gso_def with a custom command 00:02:00.698 [218/745] Generating lib/rte_gso_mingw with a custom command 00:02:00.698 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:00.698 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:00.698 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:00.698 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:00.698 [223/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:00.698 [224/745] Linking static target lib/librte_bbdev.a 00:02:00.698 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:00.698 [226/745] Generating lib/rte_ip_frag_def with a custom command 00:02:00.962 [227/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:00.962 [228/745] Generating lib/rte_ip_frag_mingw with a custom command 00:02:00.962 [229/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.962 [230/745] Generating lib/rte_jobstats_mingw with a custom command 00:02:00.962 [231/745] Generating lib/rte_jobstats_def with a custom command 00:02:00.962 
[232/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:00.962 [233/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.962 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:02:00.962 [235/745] Generating lib/rte_latencystats_def with a custom command 00:02:00.963 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:00.963 [237/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:00.963 [238/745] Linking static target lib/librte_compressdev.a 00:02:00.963 [239/745] Generating lib/rte_lpm_def with a custom command 00:02:00.963 [240/745] Generating lib/rte_lpm_mingw with a custom command 00:02:00.963 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:00.963 [242/745] Linking static target lib/librte_jobstats.a 00:02:01.226 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:01.226 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:01.226 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:01.226 [246/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:01.226 [247/745] Linking static target lib/librte_distributor.a 00:02:01.486 [248/745] Generating lib/rte_member_def with a custom command 00:02:01.486 [249/745] Generating lib/rte_member_mingw with a custom command 00:02:01.486 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:01.486 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:01.486 [252/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:01.486 [253/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.486 [254/745] Generating lib/rte_pcapng_def with a custom command 00:02:01.486 [255/745] Generating lib/rte_pcapng_mingw with a custom command 
00:02:01.486 [256/745] Linking static target lib/librte_bpf.a 00:02:01.749 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:01.749 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:01.749 [259/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:01.749 [260/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:01.749 [261/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.749 [262/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.749 [263/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:01.749 [264/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:01.749 [265/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:01.749 [266/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:01.749 [267/745] Generating lib/rte_power_def with a custom command 00:02:01.749 [268/745] Generating lib/rte_power_mingw with a custom command 00:02:01.749 [269/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:01.749 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:01.749 [271/745] Linking static target lib/librte_gpudev.a 00:02:01.749 [272/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:01.749 [273/745] Generating lib/rte_rawdev_mingw with a custom command 00:02:01.749 [274/745] Generating lib/rte_rawdev_def with a custom command 00:02:01.749 [275/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:02.012 [276/745] Generating lib/rte_regexdev_def with a custom command 00:02:02.012 [277/745] Linking static target lib/librte_gro.a 00:02:02.012 [278/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:02.012 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:02:02.012 [280/745] Compiling C 
object lib/librte_power.a.p/power_rte_power.c.o 00:02:02.012 [281/745] Generating lib/rte_dmadev_def with a custom command 00:02:02.012 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:02:02.012 [283/745] Generating lib/rte_rib_def with a custom command 00:02:02.012 [284/745] Generating lib/rte_rib_mingw with a custom command 00:02:02.012 [285/745] Generating lib/rte_reorder_def with a custom command 00:02:02.012 [286/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.012 [287/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:02.012 [288/745] Generating lib/rte_reorder_mingw with a custom command 00:02:02.012 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:02.274 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.274 [291/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:02.274 [292/745] Generating lib/rte_sched_def with a custom command 00:02:02.274 [293/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:02.274 [294/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:02.274 [295/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:02.274 [296/745] Generating lib/rte_sched_mingw with a custom command 00:02:02.274 [297/745] Generating lib/rte_security_def with a custom command 00:02:02.274 [298/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:02.274 [299/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:02.274 [300/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:02.274 [301/745] Generating lib/rte_security_mingw with a custom command 00:02:02.536 [302/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:02.536 [303/745] Generating lib/compressdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:02.536 [304/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:02.536 [305/745] Generating lib/rte_stack_def with a custom command 00:02:02.536 [306/745] Generating lib/rte_stack_mingw with a custom command 00:02:02.536 [307/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:02.536 [308/745] Linking static target lib/librte_latencystats.a 00:02:02.536 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:02.536 [310/745] Linking static target lib/librte_rawdev.a 00:02:02.536 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:02.536 [312/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:02.536 [313/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:02.536 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:02.536 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:02.536 [316/745] Linking static target lib/librte_stack.a 00:02:02.536 [317/745] Generating lib/rte_vhost_mingw with a custom command 00:02:02.536 [318/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:02.536 [319/745] Generating lib/rte_vhost_def with a custom command 00:02:02.536 [320/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:02.536 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:02.536 [322/745] Linking static target lib/librte_dmadev.a 00:02:02.536 [323/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:02.800 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:02.800 [325/745] Linking static target lib/librte_ip_frag.a 00:02:02.800 [326/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.800 [327/745] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:02.800 [328/745] Generating lib/rte_ipsec_def with a custom command 00:02:02.800 [329/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.060 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:02:03.060 [331/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:03.060 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:03.060 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:03.060 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.321 [335/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.321 [336/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.321 [337/745] Generating lib/rte_fib_def with a custom command 00:02:03.321 [338/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:03.321 [339/745] Linking static target lib/librte_gso.a 00:02:03.321 [340/745] Generating lib/rte_fib_mingw with a custom command 00:02:03.321 [341/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:03.321 [342/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:03.321 [343/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:03.321 [344/745] Linking static target lib/librte_regexdev.a 00:02:03.321 [345/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:03.585 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.585 [347/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.585 [348/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:03.585 [349/745] Compiling C object 
lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:03.585 [350/745] Linking static target lib/librte_efd.a 00:02:03.870 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:03.870 [352/745] Linking static target lib/librte_pcapng.a 00:02:03.870 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:03.870 [354/745] Linking static target lib/librte_lpm.a 00:02:03.870 [355/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:03.870 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:03.870 [357/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:03.870 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:03.870 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:03.870 [360/745] Linking static target lib/librte_reorder.a 00:02:04.141 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:04.141 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.141 [363/745] Generating lib/rte_port_def with a custom command 00:02:04.141 [364/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:04.141 [365/745] Generating lib/rte_port_mingw with a custom command 00:02:04.141 [366/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:04.141 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:04.141 [368/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:04.141 [369/745] Generating lib/rte_pdump_def with a custom command 00:02:04.141 [370/745] Generating lib/rte_pdump_mingw with a custom command 00:02:04.141 [371/745] Linking static target lib/acl/libavx2_tmp.a 00:02:04.141 [372/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.141 [373/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:04.141 [374/745] 
Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:04.141 [375/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:04.401 [376/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:04.401 [377/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:04.401 [378/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:04.401 [379/745] Linking static target lib/librte_security.a 00:02:04.401 [380/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.401 [381/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:04.401 [382/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.401 [383/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:04.401 [384/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:04.401 [385/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:04.401 [386/745] Linking static target lib/librte_power.a 00:02:04.401 [387/745] Linking static target lib/librte_hash.a 00:02:04.663 [388/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.663 [389/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:04.663 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:04.663 [391/745] Linking static target lib/librte_rib.a 00:02:04.663 [392/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:04.663 [393/745] Linking static target lib/acl/libavx512_tmp.a 00:02:04.663 [394/745] Linking static target lib/librte_acl.a 00:02:04.924 [395/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:04.924 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:04.924 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:04.924 
[398/745] Generating lib/rte_table_def with a custom command 00:02:04.924 [399/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.924 [400/745] Generating lib/rte_table_mingw with a custom command 00:02:04.924 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:05.194 [402/745] Linking static target lib/librte_ethdev.a 00:02:05.194 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:05.194 [404/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.453 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.453 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:05.453 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:05.453 [408/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:05.453 [409/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:05.453 [410/745] Linking static target lib/librte_mbuf.a 00:02:05.453 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:05.453 [412/745] Generating lib/rte_pipeline_def with a custom command 00:02:05.715 [413/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:05.715 [414/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.715 [415/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:05.715 [416/745] Generating lib/rte_pipeline_mingw with a custom command 00:02:05.715 [417/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:05.715 [418/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:05.715 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:05.715 [420/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:05.715 
[421/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:05.715 [422/745] Generating lib/rte_graph_def with a custom command 00:02:05.715 [423/745] Linking static target lib/librte_fib.a 00:02:05.715 [424/745] Generating lib/rte_graph_mingw with a custom command 00:02:05.715 [425/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:05.715 [426/745] Linking static target lib/librte_eventdev.a 00:02:05.715 [427/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:05.715 [428/745] Linking static target lib/librte_member.a 00:02:05.978 [429/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.978 [430/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:05.978 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:05.978 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:05.978 [433/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:05.978 [434/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:05.978 [435/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:05.978 [436/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:05.978 [437/745] Generating lib/rte_node_def with a custom command 00:02:06.242 [438/745] Generating lib/rte_node_mingw with a custom command 00:02:06.242 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:06.242 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.242 [441/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:06.242 [442/745] Linking static target lib/librte_sched.a 00:02:06.242 [443/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:06.242 [444/745] Compiling C object 
lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:06.242 [445/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.507 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:02:06.507 [447/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:06.507 [448/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.507 [449/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:06.507 [450/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:06.507 [451/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:06.507 [452/745] Generating drivers/rte_bus_vdev_def with a custom command 00:02:06.507 [453/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:06.507 [454/745] Generating drivers/rte_mempool_ring_def with a custom command 00:02:06.507 [455/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:06.507 [456/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:06.507 [457/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:06.507 [458/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:06.507 [459/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:06.766 [460/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:06.766 [461/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:06.766 [462/745] Linking static target lib/librte_cryptodev.a 00:02:06.766 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:06.766 [464/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:06.766 [465/745] Linking static target lib/librte_pdump.a 00:02:06.766 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 
00:02:06.766 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:06.766 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:06.766 [469/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:06.766 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:07.030 [471/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:07.030 [472/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:07.030 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:07.030 [474/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.030 [475/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:07.030 [476/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:07.030 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:07.030 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:07.030 [479/745] Generating drivers/rte_net_i40e_def with a custom command 00:02:07.293 [480/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:07.293 [481/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:07.293 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:07.293 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.293 [484/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:07.293 [485/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:07.293 [486/745] Linking static target lib/librte_table.a 00:02:07.293 [487/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.293 [488/745] Linking static target drivers/librte_bus_vdev.a 00:02:07.293 [489/745] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:07.293 [490/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.293 [491/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:07.559 [492/745] Linking static target lib/librte_ipsec.a 00:02:07.559 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:07.559 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:07.821 [495/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:07.821 [496/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.821 [497/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:07.821 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:07.821 [499/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:07.821 [500/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:08.085 [501/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:08.085 [502/745] Linking static target lib/librte_graph.a 00:02:08.085 [503/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.085 [504/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:08.085 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:08.085 [506/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:08.085 [507/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:08.085 [508/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.085 [509/745] Linking static target drivers/librte_bus_pci.a 00:02:08.085 [510/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:08.085 [511/745] Compiling C object 
drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.350 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:08.350 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:08.350 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.611 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:08.611 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.611 [517/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:08.611 [518/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.611 [519/745] Linking static target lib/librte_port.a 00:02:08.870 [520/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:08.870 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:08.870 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:09.129 [523/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:09.129 [524/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:09.129 [525/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:09.129 [526/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:09.396 [527/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:09.396 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.396 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:09.396 [530/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:09.657 [531/745] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:09.657 [532/745] Linking static target drivers/librte_mempool_ring.a 00:02:09.657 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:09.657 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:09.657 [535/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:09.657 [536/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.657 [537/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:09.657 [538/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:09.657 [539/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:09.923 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:10.187 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.187 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:10.187 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:10.187 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:10.453 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:10.453 [546/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:10.453 [547/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:10.453 [548/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:10.713 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:10.713 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:10.713 [551/745] 
Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:10.980 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:10.980 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:11.239 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:11.239 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:11.239 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:11.239 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:11.499 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:11.499 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:11.499 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:11.760 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:11.760 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:11.760 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:11.760 [564/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:11.760 [565/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:12.021 [566/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:12.021 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:12.021 [568/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:12.021 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:12.021 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:12.021 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:12.281 [572/745] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:12.281 [573/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:12.281 [574/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:12.544 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:12.544 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:12.544 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:12.544 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:12.544 [579/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.807 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:12.807 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:12.807 [582/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:12.807 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:12.807 [584/745] Linking target lib/librte_eal.so.23.0 00:02:12.807 [585/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:12.807 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:12.807 [587/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.066 [588/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:13.066 [589/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:13.325 [590/745] Linking target lib/librte_ring.so.23.0 00:02:13.325 [591/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:13.325 [592/745] Linking target lib/librte_meter.so.23.0 00:02:13.325 [593/745] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:13.325 [594/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:13.591 [595/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:13.591 [596/745] Linking target lib/librte_pci.so.23.0 00:02:13.591 [597/745] Linking target lib/librte_rcu.so.23.0 00:02:13.591 [598/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:13.592 [599/745] Linking target lib/librte_mempool.so.23.0 00:02:13.592 [600/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:13.592 [601/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:13.592 [602/745] Linking target lib/librte_timer.so.23.0 00:02:13.592 [603/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:13.592 [604/745] Linking target lib/librte_cfgfile.so.23.0 00:02:13.592 [605/745] Linking target lib/librte_acl.so.23.0 00:02:13.851 [606/745] Linking target lib/librte_jobstats.so.23.0 00:02:13.851 [607/745] Linking target lib/librte_rawdev.so.23.0 00:02:13.851 [608/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:13.851 [609/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:13.851 [610/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:13.851 [611/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:13.851 [612/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:13.851 [613/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:13.851 [614/745] Linking target lib/librte_dmadev.so.23.0 00:02:13.851 [615/745] Linking target lib/librte_stack.so.23.0 00:02:13.851 [616/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:13.851 [617/745] Linking target 
lib/librte_graph.so.23.0 00:02:13.851 [618/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:13.851 [619/745] Linking target lib/librte_rib.so.23.0 00:02:13.851 [620/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:13.851 [621/745] Linking target lib/librte_mbuf.so.23.0 00:02:13.851 [622/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:13.851 [623/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:13.851 [624/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:13.851 [625/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:14.110 [626/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:14.110 [627/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:14.110 [628/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:14.110 [629/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:14.110 [630/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:14.110 [631/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:14.110 [632/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:14.110 [633/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:14.110 [634/745] Linking target lib/librte_fib.so.23.0 00:02:14.110 [635/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:14.110 [636/745] Linking target lib/librte_distributor.so.23.0 00:02:14.110 [637/745] Linking target lib/librte_gpudev.so.23.0 00:02:14.110 [638/745] Linking target lib/librte_net.so.23.0 00:02:14.110 [639/745] Linking target lib/librte_sched.so.23.0 00:02:14.110 [640/745] Linking target lib/librte_cryptodev.so.23.0 00:02:14.110 [641/745] Linking target 
lib/librte_reorder.so.23.0 00:02:14.110 [642/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:14.110 [643/745] Linking target lib/librte_bbdev.so.23.0 00:02:14.110 [644/745] Linking target lib/librte_compressdev.so.23.0 00:02:14.110 [645/745] Linking target lib/librte_regexdev.so.23.0 00:02:14.110 [646/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:14.369 [647/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:14.369 [648/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:14.369 [649/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:14.369 [650/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:14.369 [651/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:14.369 [652/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:14.369 [653/745] Linking target lib/librte_security.so.23.0 00:02:14.369 [654/745] Linking target lib/librte_ethdev.so.23.0 00:02:14.369 [655/745] Linking target lib/librte_hash.so.23.0 00:02:14.369 [656/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:14.369 [657/745] Linking target lib/librte_cmdline.so.23.0 00:02:14.369 [658/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:14.369 [659/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:14.369 [660/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:14.627 [661/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:14.627 [662/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:14.627 [663/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:14.627 [664/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:14.627 [665/745] 
Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:14.627 [666/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:14.627 [667/745] Linking target lib/librte_efd.so.23.0 00:02:14.627 [668/745] Linking target lib/librte_member.so.23.0 00:02:14.627 [669/745] Linking target lib/librte_metrics.so.23.0 00:02:14.627 [670/745] Linking target lib/librte_pcapng.so.23.0 00:02:14.627 [671/745] Linking target lib/librte_power.so.23.0 00:02:14.627 [672/745] Linking target lib/librte_gro.so.23.0 00:02:14.627 [673/745] Linking target lib/librte_lpm.so.23.0 00:02:14.627 [674/745] Linking target lib/librte_gso.so.23.0 00:02:14.627 [675/745] Linking target lib/librte_ipsec.so.23.0 00:02:14.627 [676/745] Linking target lib/librte_ip_frag.so.23.0 00:02:14.627 [677/745] Linking target lib/librte_bpf.so.23.0 00:02:14.627 [678/745] Linking target lib/librte_eventdev.so.23.0 00:02:14.627 [679/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:14.627 [680/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:14.627 [681/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:14.885 [682/745] Linking target lib/librte_latencystats.so.23.0 00:02:14.885 [683/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:14.885 [684/745] Linking target lib/librte_bitratestats.so.23.0 00:02:14.885 [685/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:14.885 [686/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:14.885 [687/745] Linking target lib/librte_pdump.so.23.0 00:02:14.885 [688/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:14.885 [689/745] Linking target lib/librte_port.so.23.0 00:02:14.885 [690/745] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:14.885 [691/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:14.885 [692/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:15.145 [693/745] Linking target lib/librte_table.so.23.0 00:02:15.145 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:15.145 [695/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:15.145 [696/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:15.145 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:15.411 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:15.669 [699/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:15.927 [700/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:15.927 [701/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:15.927 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:15.927 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:16.185 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:16.185 [705/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:16.185 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:16.185 [707/745] Linking static target drivers/librte_net_i40e.a 00:02:16.442 [708/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:16.442 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:16.700 [710/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.958 [711/745] 
Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:16.958 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:02:17.891 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:17.891 [714/745] Linking static target lib/librte_node.a 00:02:17.892 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.892 [716/745] Linking target lib/librte_node.so.23.0 00:02:18.457 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:18.457 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:19.390 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:27.522 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.605 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:59.605 [722/745] Linking static target lib/librte_vhost.a 00:02:59.605 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.605 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:14.500 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:14.500 [726/745] Linking static target lib/librte_pipeline.a 00:03:15.435 [727/745] Linking target app/dpdk-test-cmdline 00:03:15.435 [728/745] Linking target app/dpdk-proc-info 00:03:15.435 [729/745] Linking target app/dpdk-pdump 00:03:15.435 [730/745] Linking target app/dpdk-test-fib 00:03:15.435 [731/745] Linking target app/dpdk-test-gpudev 00:03:15.435 [732/745] Linking target app/dpdk-test-regex 00:03:15.435 [733/745] Linking target app/dpdk-dumpcap 00:03:15.435 [734/745] Linking target app/dpdk-test-sad 00:03:15.435 [735/745] Linking target app/dpdk-test-acl 00:03:15.435 [736/745] Linking target app/dpdk-test-security-perf 00:03:15.435 [737/745] Linking target app/dpdk-test-pipeline 00:03:15.435 [738/745] Linking target app/dpdk-test-flow-perf 00:03:15.435 
[739/745] Linking target app/dpdk-test-bbdev 00:03:15.435 [740/745] Linking target app/dpdk-test-eventdev 00:03:15.435 [741/745] Linking target app/dpdk-test-crypto-perf 00:03:15.435 [742/745] Linking target app/dpdk-test-compress-perf 00:03:15.435 [743/745] Linking target app/dpdk-testpmd 00:03:17.337 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.337 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:17.337 05:23:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:17.337 05:23:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:17.337 05:23:10 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:17.337 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:17.337 [0/1] Installing files. 00:03:17.600 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 
00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:17.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.601 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:17.602 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.602 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:17.602 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.602 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 
00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:17.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.605 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:17.605 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:17.606 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:17.606 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:17.606 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:17.606 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:17.606 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:17.606 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:17.606 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:17.606 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:17.606 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.176 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:18.177 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:18.177 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:18.177 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:18.177 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:18.177 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:18.179 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:18.180 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:18.181 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:18.181 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:18.181 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:18.181 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:18.181 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 
00:03:18.181 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:18.181 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:18.181 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:18.181 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:18.181 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:18.181 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:18.181 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:18.181 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:18.181 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:18.181 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:18.181 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:18.181 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:18.181 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:18.181 Installing symlink pointing to librte_ethdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:18.181 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:18.181 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:18.181 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:18.181 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:18.181 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:18.181 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:18.181 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:18.181 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:18.181 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:18.181 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:18.181 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:18.181 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:18.181 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:18.181 Installing symlink pointing to 
librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:18.181 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:18.181 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:18.181 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:18.181 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:18.181 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:18.181 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:18.181 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:18.181 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:18.181 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:18.181 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:18.181 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:18.181 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:18.181 Installing symlink pointing to librte_distributor.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:18.181 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:18.181 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:18.181 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:18.181 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:18.181 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:18.181 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:18.181 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:18.181 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:18.181 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:18.181 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:18.181 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:18.181 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:18.181 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:18.181 Installing 
symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:18.181 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:18.181 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:18.181 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:18.181 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:18.181 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:18.181 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:18.181 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:18.181 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:18.181 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:18.181 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:18.181 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:18.181 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:18.181 Installing symlink pointing to librte_regexdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:18.182 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:18.182 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:18.182 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:18.182 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:18.182 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:18.182 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:18.182 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:18.182 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:18.182 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:18.182 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:18.182 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:18.182 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:18.182 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:18.182 
Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:18.182 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:18.182 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:18.182 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:18.182 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:18.182 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:18.182 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:18.182 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:18.182 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:18.182 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:18.182 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:18.182 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:18.182 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:18.182 Installing symlink pointing to librte_pipeline.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:18.182 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:18.182 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:18.182 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:18.182 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:18.182 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:18.182 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:18.182 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:18.182 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:18.182 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:18.182 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:18.182 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:18.182 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:18.182 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:18.182 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:18.182 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:18.182 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:18.182 
'./librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:18.182 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:18.182 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:18.182 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:18.182 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:18.182 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:18.182 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:18.182 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:18.182 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:18.182 05:23:11 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:18.182 05:23:11 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:18.182 00:03:18.182 real 1m26.506s 00:03:18.182 user 14m28.982s 00:03:18.182 sys 1m48.315s 00:03:18.182 05:23:11 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:18.182 05:23:11 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:18.182 ************************************ 00:03:18.182 END TEST build_native_dpdk 00:03:18.182 ************************************ 00:03:18.182 05:23:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:18.182 05:23:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:18.182 05:23:11 -- spdk/autobuild.sh@51 -- $ [[ 
0 -eq 1 ]] 00:03:18.182 05:23:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:18.182 05:23:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:18.182 05:23:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:18.182 05:23:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:18.182 05:23:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:18.182 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:18.441 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:18.441 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:18.441 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:18.698 Using 'verbs' RDMA provider 00:03:29.238 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:37.387 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:37.387 Creating mk/config.mk...done. 00:03:37.387 Creating mk/cc.flags.mk...done. 00:03:37.387 Type 'make' to build. 
00:03:37.387 05:23:31 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:37.387 05:23:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:37.387 05:23:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:37.387 05:23:31 -- common/autotest_common.sh@10 -- $ set +x 00:03:37.644 ************************************ 00:03:37.644 START TEST make 00:03:37.644 ************************************ 00:03:37.644 05:23:31 make -- common/autotest_common.sh@1125 -- $ make -j48 00:03:37.904 make[1]: Nothing to be done for 'all'. 00:03:39.293 The Meson build system 00:03:39.293 Version: 1.3.1 00:03:39.293 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:39.293 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:39.293 Build type: native build 00:03:39.293 Project name: libvfio-user 00:03:39.293 Project version: 0.0.1 00:03:39.293 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:39.293 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:39.293 Host machine cpu family: x86_64 00:03:39.293 Host machine cpu: x86_64 00:03:39.293 Run-time dependency threads found: YES 00:03:39.293 Library dl found: YES 00:03:39.293 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:39.293 Run-time dependency json-c found: YES 0.17 00:03:39.294 Run-time dependency cmocka found: YES 1.1.7 00:03:39.294 Program pytest-3 found: NO 00:03:39.294 Program flake8 found: NO 00:03:39.294 Program misspell-fixer found: NO 00:03:39.294 Program restructuredtext-lint found: NO 00:03:39.294 Program valgrind found: YES (/usr/bin/valgrind) 00:03:39.294 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:39.294 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:39.294 Compiler for C supports arguments -Wwrite-strings: YES 00:03:39.294 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:39.294 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:39.294 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:39.294 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:39.294 Build targets in project: 8 00:03:39.294 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:39.294 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:39.294 00:03:39.294 libvfio-user 0.0.1 00:03:39.294 00:03:39.294 User defined options 00:03:39.294 buildtype : debug 00:03:39.294 default_library: shared 00:03:39.294 libdir : /usr/local/lib 00:03:39.294 00:03:39.294 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:40.252 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:40.252 [1/37] Compiling C object samples/null.p/null.c.o 00:03:40.252 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:40.252 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:40.252 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:40.252 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:40.252 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:40.252 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:40.252 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:40.252 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:40.252 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:40.252 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 
00:03:40.252 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:40.252 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:40.252 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:40.516 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:40.516 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:40.516 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:40.516 [18/37] Compiling C object samples/server.p/server.c.o 00:03:40.516 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:40.516 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:40.516 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:40.516 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:40.516 [23/37] Compiling C object samples/client.p/client.c.o 00:03:40.516 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:40.516 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:40.516 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:40.516 [27/37] Linking target samples/client 00:03:40.516 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:40.516 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:40.778 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:40.778 [31/37] Linking target test/unit_tests 00:03:40.778 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:40.778 [33/37] Linking target samples/null 00:03:41.039 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:41.039 [35/37] Linking target samples/gpio-pci-idio-16 00:03:41.039 [36/37] Linking target samples/lspci 00:03:41.039 [37/37] Linking target samples/server 00:03:41.039 INFO: autodetecting backend as ninja 00:03:41.039 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:41.039 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:41.619 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:41.619 ninja: no work to do. 00:03:53.836 CC lib/ut/ut.o 00:03:53.836 CC lib/ut_mock/mock.o 00:03:53.836 CC lib/log/log.o 00:03:53.836 CC lib/log/log_flags.o 00:03:53.836 CC lib/log/log_deprecated.o 00:03:53.836 LIB libspdk_log.a 00:03:53.836 LIB libspdk_ut.a 00:03:53.836 LIB libspdk_ut_mock.a 00:03:53.836 SO libspdk_ut_mock.so.6.0 00:03:53.836 SO libspdk_ut.so.2.0 00:03:53.836 SO libspdk_log.so.7.0 00:03:53.836 SYMLINK libspdk_ut_mock.so 00:03:53.836 SYMLINK libspdk_ut.so 00:03:53.836 SYMLINK libspdk_log.so 00:03:53.836 CXX lib/trace_parser/trace.o 00:03:53.836 CC lib/dma/dma.o 00:03:53.836 CC lib/ioat/ioat.o 00:03:53.836 CC lib/util/base64.o 00:03:53.836 CC lib/util/bit_array.o 00:03:53.836 CC lib/util/cpuset.o 00:03:53.836 CC lib/util/crc16.o 00:03:53.836 CC lib/util/crc32.o 00:03:53.836 CC lib/util/crc32c.o 00:03:53.836 CC lib/util/crc32_ieee.o 00:03:53.836 CC lib/util/crc64.o 00:03:53.836 CC lib/util/dif.o 00:03:53.836 CC lib/util/fd.o 00:03:53.836 CC lib/util/fd_group.o 00:03:53.836 CC lib/util/file.o 00:03:53.836 CC lib/util/hexlify.o 00:03:53.836 CC lib/util/iov.o 00:03:53.836 CC lib/util/math.o 00:03:53.836 CC lib/util/net.o 00:03:53.836 CC lib/util/pipe.o 00:03:53.836 CC lib/util/strerror_tls.o 00:03:53.836 CC lib/util/string.o 00:03:53.836 CC lib/util/uuid.o 00:03:53.836 CC lib/util/xor.o 00:03:53.836 CC lib/util/zipf.o 00:03:53.836 CC lib/vfio_user/host/vfio_user_pci.o 00:03:53.836 CC lib/vfio_user/host/vfio_user.o 00:03:53.836 LIB libspdk_dma.a 00:03:53.836 SO libspdk_dma.so.4.0 00:03:53.836 SYMLINK libspdk_dma.so 00:03:53.836 LIB libspdk_ioat.a 
00:03:53.836 SO libspdk_ioat.so.7.0 00:03:53.836 SYMLINK libspdk_ioat.so 00:03:54.092 LIB libspdk_vfio_user.a 00:03:54.092 SO libspdk_vfio_user.so.5.0 00:03:54.092 SYMLINK libspdk_vfio_user.so 00:03:54.092 LIB libspdk_util.a 00:03:54.350 SO libspdk_util.so.10.0 00:03:54.350 SYMLINK libspdk_util.so 00:03:54.607 CC lib/env_dpdk/env.o 00:03:54.607 CC lib/vmd/vmd.o 00:03:54.607 CC lib/rdma_provider/common.o 00:03:54.607 CC lib/json/json_parse.o 00:03:54.607 CC lib/rdma_utils/rdma_utils.o 00:03:54.607 CC lib/conf/conf.o 00:03:54.607 CC lib/idxd/idxd.o 00:03:54.607 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:54.607 CC lib/json/json_util.o 00:03:54.607 CC lib/vmd/led.o 00:03:54.607 CC lib/env_dpdk/memory.o 00:03:54.607 CC lib/idxd/idxd_user.o 00:03:54.607 CC lib/json/json_write.o 00:03:54.607 CC lib/env_dpdk/pci.o 00:03:54.607 CC lib/idxd/idxd_kernel.o 00:03:54.607 CC lib/env_dpdk/init.o 00:03:54.607 CC lib/env_dpdk/threads.o 00:03:54.607 CC lib/env_dpdk/pci_ioat.o 00:03:54.607 CC lib/env_dpdk/pci_virtio.o 00:03:54.607 CC lib/env_dpdk/pci_vmd.o 00:03:54.607 CC lib/env_dpdk/pci_idxd.o 00:03:54.607 CC lib/env_dpdk/pci_event.o 00:03:54.607 CC lib/env_dpdk/sigbus_handler.o 00:03:54.607 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:54.607 CC lib/env_dpdk/pci_dpdk.o 00:03:54.607 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:54.607 LIB libspdk_trace_parser.a 00:03:54.607 SO libspdk_trace_parser.so.5.0 00:03:54.607 SYMLINK libspdk_trace_parser.so 00:03:54.865 LIB libspdk_rdma_provider.a 00:03:54.865 SO libspdk_rdma_provider.so.6.0 00:03:54.865 LIB libspdk_rdma_utils.a 00:03:54.865 SYMLINK libspdk_rdma_provider.so 00:03:54.865 SO libspdk_rdma_utils.so.1.0 00:03:54.865 LIB libspdk_json.a 00:03:54.865 LIB libspdk_conf.a 00:03:54.865 SO libspdk_json.so.6.0 00:03:54.865 SO libspdk_conf.so.6.0 00:03:54.865 SYMLINK libspdk_rdma_utils.so 00:03:54.865 SYMLINK libspdk_conf.so 00:03:54.865 SYMLINK libspdk_json.so 00:03:55.122 CC lib/jsonrpc/jsonrpc_server.o 00:03:55.122 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:03:55.122 CC lib/jsonrpc/jsonrpc_client.o 00:03:55.122 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:55.122 LIB libspdk_idxd.a 00:03:55.122 SO libspdk_idxd.so.12.0 00:03:55.122 SYMLINK libspdk_idxd.so 00:03:55.379 LIB libspdk_vmd.a 00:03:55.379 SO libspdk_vmd.so.6.0 00:03:55.379 SYMLINK libspdk_vmd.so 00:03:55.379 LIB libspdk_jsonrpc.a 00:03:55.380 SO libspdk_jsonrpc.so.6.0 00:03:55.380 SYMLINK libspdk_jsonrpc.so 00:03:55.637 CC lib/rpc/rpc.o 00:03:55.894 LIB libspdk_rpc.a 00:03:55.894 SO libspdk_rpc.so.6.0 00:03:55.894 SYMLINK libspdk_rpc.so 00:03:56.152 CC lib/notify/notify.o 00:03:56.152 CC lib/keyring/keyring.o 00:03:56.152 CC lib/trace/trace.o 00:03:56.152 CC lib/notify/notify_rpc.o 00:03:56.152 CC lib/keyring/keyring_rpc.o 00:03:56.152 CC lib/trace/trace_flags.o 00:03:56.152 CC lib/trace/trace_rpc.o 00:03:56.451 LIB libspdk_notify.a 00:03:56.451 SO libspdk_notify.so.6.0 00:03:56.451 SYMLINK libspdk_notify.so 00:03:56.451 LIB libspdk_keyring.a 00:03:56.451 LIB libspdk_trace.a 00:03:56.451 SO libspdk_keyring.so.1.0 00:03:56.451 SO libspdk_trace.so.10.0 00:03:56.451 SYMLINK libspdk_keyring.so 00:03:56.451 SYMLINK libspdk_trace.so 00:03:56.451 LIB libspdk_env_dpdk.a 00:03:56.709 SO libspdk_env_dpdk.so.15.0 00:03:56.709 CC lib/sock/sock.o 00:03:56.709 CC lib/thread/thread.o 00:03:56.709 CC lib/sock/sock_rpc.o 00:03:56.709 CC lib/thread/iobuf.o 00:03:56.709 SYMLINK libspdk_env_dpdk.so 00:03:56.967 LIB libspdk_sock.a 00:03:56.967 SO libspdk_sock.so.10.0 00:03:57.226 SYMLINK libspdk_sock.so 00:03:57.226 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:57.226 CC lib/nvme/nvme_ctrlr.o 00:03:57.226 CC lib/nvme/nvme_fabric.o 00:03:57.226 CC lib/nvme/nvme_ns_cmd.o 00:03:57.226 CC lib/nvme/nvme_ns.o 00:03:57.226 CC lib/nvme/nvme_pcie_common.o 00:03:57.226 CC lib/nvme/nvme_pcie.o 00:03:57.226 CC lib/nvme/nvme_qpair.o 00:03:57.226 CC lib/nvme/nvme.o 00:03:57.226 CC lib/nvme/nvme_quirks.o 00:03:57.226 CC lib/nvme/nvme_transport.o 00:03:57.226 CC 
lib/nvme/nvme_discovery.o 00:03:57.226 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:57.226 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:57.226 CC lib/nvme/nvme_tcp.o 00:03:57.226 CC lib/nvme/nvme_opal.o 00:03:57.226 CC lib/nvme/nvme_io_msg.o 00:03:57.226 CC lib/nvme/nvme_poll_group.o 00:03:57.226 CC lib/nvme/nvme_zns.o 00:03:57.226 CC lib/nvme/nvme_stubs.o 00:03:57.226 CC lib/nvme/nvme_auth.o 00:03:57.226 CC lib/nvme/nvme_cuse.o 00:03:57.226 CC lib/nvme/nvme_rdma.o 00:03:57.226 CC lib/nvme/nvme_vfio_user.o 00:03:58.161 LIB libspdk_thread.a 00:03:58.161 SO libspdk_thread.so.10.1 00:03:58.419 SYMLINK libspdk_thread.so 00:03:58.419 CC lib/vfu_tgt/tgt_endpoint.o 00:03:58.419 CC lib/blob/blobstore.o 00:03:58.419 CC lib/init/json_config.o 00:03:58.419 CC lib/accel/accel.o 00:03:58.419 CC lib/virtio/virtio.o 00:03:58.419 CC lib/init/subsystem.o 00:03:58.419 CC lib/vfu_tgt/tgt_rpc.o 00:03:58.419 CC lib/accel/accel_rpc.o 00:03:58.419 CC lib/blob/request.o 00:03:58.419 CC lib/virtio/virtio_vhost_user.o 00:03:58.419 CC lib/accel/accel_sw.o 00:03:58.419 CC lib/init/subsystem_rpc.o 00:03:58.419 CC lib/virtio/virtio_vfio_user.o 00:03:58.419 CC lib/blob/zeroes.o 00:03:58.419 CC lib/init/rpc.o 00:03:58.419 CC lib/virtio/virtio_pci.o 00:03:58.419 CC lib/blob/blob_bs_dev.o 00:03:58.677 LIB libspdk_init.a 00:03:58.677 SO libspdk_init.so.5.0 00:03:58.936 LIB libspdk_virtio.a 00:03:58.936 LIB libspdk_vfu_tgt.a 00:03:58.936 SYMLINK libspdk_init.so 00:03:58.936 SO libspdk_virtio.so.7.0 00:03:58.936 SO libspdk_vfu_tgt.so.3.0 00:03:58.936 SYMLINK libspdk_vfu_tgt.so 00:03:58.936 SYMLINK libspdk_virtio.so 00:03:58.936 CC lib/event/app.o 00:03:58.936 CC lib/event/reactor.o 00:03:58.936 CC lib/event/log_rpc.o 00:03:58.936 CC lib/event/app_rpc.o 00:03:58.936 CC lib/event/scheduler_static.o 00:03:59.502 LIB libspdk_event.a 00:03:59.502 SO libspdk_event.so.14.0 00:03:59.502 LIB libspdk_accel.a 00:03:59.502 SYMLINK libspdk_event.so 00:03:59.502 SO libspdk_accel.so.16.0 00:03:59.502 SYMLINK libspdk_accel.so 
00:03:59.760 LIB libspdk_nvme.a 00:03:59.760 CC lib/bdev/bdev.o 00:03:59.760 CC lib/bdev/bdev_rpc.o 00:03:59.760 CC lib/bdev/bdev_zone.o 00:03:59.760 CC lib/bdev/part.o 00:03:59.760 CC lib/bdev/scsi_nvme.o 00:03:59.760 SO libspdk_nvme.so.13.1 00:04:00.017 SYMLINK libspdk_nvme.so 00:04:01.390 LIB libspdk_blob.a 00:04:01.390 SO libspdk_blob.so.11.0 00:04:01.648 SYMLINK libspdk_blob.so 00:04:01.648 CC lib/blobfs/blobfs.o 00:04:01.648 CC lib/blobfs/tree.o 00:04:01.648 CC lib/lvol/lvol.o 00:04:02.213 LIB libspdk_bdev.a 00:04:02.213 SO libspdk_bdev.so.16.0 00:04:02.474 SYMLINK libspdk_bdev.so 00:04:02.474 LIB libspdk_blobfs.a 00:04:02.474 SO libspdk_blobfs.so.10.0 00:04:02.474 CC lib/nbd/nbd.o 00:04:02.474 CC lib/ublk/ublk.o 00:04:02.474 CC lib/scsi/dev.o 00:04:02.474 CC lib/nvmf/ctrlr.o 00:04:02.474 CC lib/nbd/nbd_rpc.o 00:04:02.474 CC lib/scsi/lun.o 00:04:02.474 CC lib/ublk/ublk_rpc.o 00:04:02.474 CC lib/ftl/ftl_core.o 00:04:02.474 CC lib/nvmf/ctrlr_discovery.o 00:04:02.474 CC lib/scsi/port.o 00:04:02.474 CC lib/nvmf/ctrlr_bdev.o 00:04:02.474 CC lib/ftl/ftl_init.o 00:04:02.474 CC lib/scsi/scsi.o 00:04:02.474 CC lib/nvmf/subsystem.o 00:04:02.474 CC lib/ftl/ftl_layout.o 00:04:02.474 CC lib/scsi/scsi_bdev.o 00:04:02.474 CC lib/ftl/ftl_debug.o 00:04:02.474 CC lib/scsi/scsi_pr.o 00:04:02.474 CC lib/nvmf/nvmf.o 00:04:02.474 CC lib/ftl/ftl_io.o 00:04:02.474 CC lib/nvmf/nvmf_rpc.o 00:04:02.474 CC lib/ftl/ftl_sb.o 00:04:02.474 CC lib/nvmf/transport.o 00:04:02.474 CC lib/scsi/scsi_rpc.o 00:04:02.474 CC lib/scsi/task.o 00:04:02.474 CC lib/ftl/ftl_l2p.o 00:04:02.474 CC lib/nvmf/tcp.o 00:04:02.474 CC lib/ftl/ftl_l2p_flat.o 00:04:02.474 CC lib/nvmf/stubs.o 00:04:02.474 CC lib/ftl/ftl_nv_cache.o 00:04:02.474 CC lib/nvmf/mdns_server.o 00:04:02.474 CC lib/ftl/ftl_band.o 00:04:02.474 CC lib/nvmf/vfio_user.o 00:04:02.474 CC lib/nvmf/rdma.o 00:04:02.474 CC lib/ftl/ftl_band_ops.o 00:04:02.474 CC lib/nvmf/auth.o 00:04:02.474 CC lib/ftl/ftl_writer.o 00:04:02.474 CC lib/ftl/ftl_rq.o 
00:04:02.474 CC lib/ftl/ftl_reloc.o 00:04:02.474 CC lib/ftl/ftl_l2p_cache.o 00:04:02.474 CC lib/ftl/ftl_p2l.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.741 SYMLINK libspdk_blobfs.so 00:04:02.741 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.741 LIB libspdk_lvol.a 00:04:02.741 SO libspdk_lvol.so.10.0 00:04:03.001 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:03.001 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:03.001 SYMLINK libspdk_lvol.so 00:04:03.001 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:03.001 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:03.001 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:03.001 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:03.001 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:03.001 CC lib/ftl/utils/ftl_conf.o 00:04:03.001 CC lib/ftl/utils/ftl_md.o 00:04:03.001 CC lib/ftl/utils/ftl_mempool.o 00:04:03.001 CC lib/ftl/utils/ftl_bitmap.o 00:04:03.001 CC lib/ftl/utils/ftl_property.o 00:04:03.001 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:03.001 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:03.001 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:03.001 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:03.001 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:03.001 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:03.258 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:03.258 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:03.258 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:03.259 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:03.259 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:03.259 CC lib/ftl/base/ftl_base_dev.o 00:04:03.259 CC lib/ftl/base/ftl_base_bdev.o 00:04:03.259 CC lib/ftl/ftl_trace.o 00:04:03.259 LIB libspdk_nbd.a 00:04:03.516 SO libspdk_nbd.so.7.0 00:04:03.516 LIB libspdk_scsi.a 00:04:03.516 SYMLINK libspdk_nbd.so 00:04:03.516 SO libspdk_scsi.so.9.0 00:04:03.516 LIB libspdk_ublk.a 00:04:03.516 SO libspdk_ublk.so.3.0 00:04:03.516 SYMLINK 
libspdk_scsi.so 00:04:03.773 SYMLINK libspdk_ublk.so 00:04:03.773 CC lib/iscsi/conn.o 00:04:03.773 CC lib/iscsi/init_grp.o 00:04:03.773 CC lib/iscsi/iscsi.o 00:04:03.773 CC lib/vhost/vhost.o 00:04:03.773 CC lib/iscsi/md5.o 00:04:03.773 CC lib/vhost/vhost_rpc.o 00:04:03.773 CC lib/vhost/vhost_scsi.o 00:04:03.773 CC lib/iscsi/param.o 00:04:03.773 CC lib/vhost/vhost_blk.o 00:04:03.773 CC lib/vhost/rte_vhost_user.o 00:04:03.773 CC lib/iscsi/portal_grp.o 00:04:03.773 CC lib/iscsi/tgt_node.o 00:04:03.773 CC lib/iscsi/iscsi_subsystem.o 00:04:03.773 CC lib/iscsi/iscsi_rpc.o 00:04:03.773 CC lib/iscsi/task.o 00:04:04.030 LIB libspdk_ftl.a 00:04:04.288 SO libspdk_ftl.so.9.0 00:04:04.546 SYMLINK libspdk_ftl.so 00:04:05.112 LIB libspdk_vhost.a 00:04:05.112 SO libspdk_vhost.so.8.0 00:04:05.112 SYMLINK libspdk_vhost.so 00:04:05.112 LIB libspdk_nvmf.a 00:04:05.112 LIB libspdk_iscsi.a 00:04:05.370 SO libspdk_nvmf.so.19.0 00:04:05.370 SO libspdk_iscsi.so.8.0 00:04:05.370 SYMLINK libspdk_iscsi.so 00:04:05.370 SYMLINK libspdk_nvmf.so 00:04:05.629 CC module/vfu_device/vfu_virtio.o 00:04:05.629 CC module/env_dpdk/env_dpdk_rpc.o 00:04:05.629 CC module/vfu_device/vfu_virtio_blk.o 00:04:05.629 CC module/vfu_device/vfu_virtio_scsi.o 00:04:05.629 CC module/vfu_device/vfu_virtio_rpc.o 00:04:05.887 CC module/accel/dsa/accel_dsa.o 00:04:05.887 CC module/accel/dsa/accel_dsa_rpc.o 00:04:05.887 CC module/accel/iaa/accel_iaa.o 00:04:05.887 CC module/keyring/file/keyring.o 00:04:05.887 CC module/keyring/file/keyring_rpc.o 00:04:05.887 CC module/accel/iaa/accel_iaa_rpc.o 00:04:05.887 CC module/accel/error/accel_error.o 00:04:05.887 CC module/accel/ioat/accel_ioat.o 00:04:05.887 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:05.887 CC module/accel/error/accel_error_rpc.o 00:04:05.887 CC module/keyring/linux/keyring.o 00:04:05.887 CC module/sock/posix/posix.o 00:04:05.887 CC module/accel/ioat/accel_ioat_rpc.o 00:04:05.887 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:05.887 CC 
module/blob/bdev/blob_bdev.o 00:04:05.887 CC module/keyring/linux/keyring_rpc.o 00:04:05.887 CC module/scheduler/gscheduler/gscheduler.o 00:04:05.887 LIB libspdk_env_dpdk_rpc.a 00:04:05.887 SO libspdk_env_dpdk_rpc.so.6.0 00:04:05.887 SYMLINK libspdk_env_dpdk_rpc.so 00:04:05.887 LIB libspdk_keyring_file.a 00:04:05.887 LIB libspdk_keyring_linux.a 00:04:05.887 LIB libspdk_scheduler_gscheduler.a 00:04:05.887 LIB libspdk_scheduler_dpdk_governor.a 00:04:06.145 SO libspdk_keyring_file.so.1.0 00:04:06.145 SO libspdk_keyring_linux.so.1.0 00:04:06.145 SO libspdk_scheduler_gscheduler.so.4.0 00:04:06.145 LIB libspdk_accel_error.a 00:04:06.145 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:06.145 LIB libspdk_accel_ioat.a 00:04:06.145 LIB libspdk_scheduler_dynamic.a 00:04:06.145 LIB libspdk_accel_iaa.a 00:04:06.145 SO libspdk_accel_error.so.2.0 00:04:06.145 SO libspdk_accel_ioat.so.6.0 00:04:06.145 SO libspdk_scheduler_dynamic.so.4.0 00:04:06.145 SYMLINK libspdk_scheduler_gscheduler.so 00:04:06.145 SYMLINK libspdk_keyring_file.so 00:04:06.145 SYMLINK libspdk_keyring_linux.so 00:04:06.145 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:06.145 SO libspdk_accel_iaa.so.3.0 00:04:06.145 LIB libspdk_accel_dsa.a 00:04:06.145 SYMLINK libspdk_accel_error.so 00:04:06.145 SYMLINK libspdk_scheduler_dynamic.so 00:04:06.145 LIB libspdk_blob_bdev.a 00:04:06.145 SYMLINK libspdk_accel_ioat.so 00:04:06.145 SO libspdk_accel_dsa.so.5.0 00:04:06.145 SYMLINK libspdk_accel_iaa.so 00:04:06.145 SO libspdk_blob_bdev.so.11.0 00:04:06.145 SYMLINK libspdk_accel_dsa.so 00:04:06.145 SYMLINK libspdk_blob_bdev.so 00:04:06.403 LIB libspdk_vfu_device.a 00:04:06.403 SO libspdk_vfu_device.so.3.0 00:04:06.403 CC module/bdev/gpt/gpt.o 00:04:06.403 CC module/bdev/null/bdev_null.o 00:04:06.403 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:06.403 CC module/bdev/lvol/vbdev_lvol.o 00:04:06.403 CC module/bdev/delay/vbdev_delay.o 00:04:06.403 CC module/bdev/null/bdev_null_rpc.o 00:04:06.403 CC module/bdev/gpt/vbdev_gpt.o 
00:04:06.403 CC module/bdev/error/vbdev_error.o 00:04:06.403 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:06.403 CC module/bdev/split/vbdev_split.o 00:04:06.403 CC module/bdev/split/vbdev_split_rpc.o 00:04:06.403 CC module/bdev/error/vbdev_error_rpc.o 00:04:06.403 CC module/bdev/malloc/bdev_malloc.o 00:04:06.403 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:06.403 CC module/bdev/raid/bdev_raid.o 00:04:06.403 CC module/bdev/raid/bdev_raid_rpc.o 00:04:06.403 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:06.403 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:06.403 CC module/bdev/raid/bdev_raid_sb.o 00:04:06.403 CC module/bdev/raid/raid0.o 00:04:06.403 CC module/blobfs/bdev/blobfs_bdev.o 00:04:06.403 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:06.403 CC module/bdev/passthru/vbdev_passthru.o 00:04:06.403 CC module/bdev/raid/concat.o 00:04:06.403 CC module/bdev/raid/raid1.o 00:04:06.403 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:06.403 CC module/bdev/ftl/bdev_ftl.o 00:04:06.403 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:06.403 CC module/bdev/nvme/bdev_nvme.o 00:04:06.403 CC module/bdev/aio/bdev_aio.o 00:04:06.403 CC module/bdev/iscsi/bdev_iscsi.o 00:04:06.403 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:06.403 CC module/bdev/aio/bdev_aio_rpc.o 00:04:06.403 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:06.403 CC module/bdev/nvme/nvme_rpc.o 00:04:06.403 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:06.403 CC module/bdev/nvme/bdev_mdns_client.o 00:04:06.403 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:06.403 CC module/bdev/nvme/vbdev_opal.o 00:04:06.403 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:06.403 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:06.403 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:06.664 SYMLINK libspdk_vfu_device.so 00:04:06.664 LIB libspdk_sock_posix.a 00:04:06.664 SO libspdk_sock_posix.so.6.0 00:04:06.922 SYMLINK libspdk_sock_posix.so 00:04:06.922 LIB libspdk_bdev_null.a 00:04:06.922 LIB libspdk_blobfs_bdev.a 00:04:06.922 SO 
libspdk_bdev_null.so.6.0 00:04:06.922 SO libspdk_blobfs_bdev.so.6.0 00:04:06.922 LIB libspdk_bdev_split.a 00:04:06.922 LIB libspdk_bdev_ftl.a 00:04:06.922 LIB libspdk_bdev_gpt.a 00:04:06.922 SO libspdk_bdev_split.so.6.0 00:04:06.922 SO libspdk_bdev_gpt.so.6.0 00:04:06.922 SO libspdk_bdev_ftl.so.6.0 00:04:06.922 SYMLINK libspdk_blobfs_bdev.so 00:04:06.922 LIB libspdk_bdev_error.a 00:04:06.922 SYMLINK libspdk_bdev_null.so 00:04:06.922 SO libspdk_bdev_error.so.6.0 00:04:06.922 SYMLINK libspdk_bdev_split.so 00:04:06.922 LIB libspdk_bdev_aio.a 00:04:06.922 SYMLINK libspdk_bdev_ftl.so 00:04:06.922 LIB libspdk_bdev_passthru.a 00:04:06.922 SYMLINK libspdk_bdev_gpt.so 00:04:06.922 LIB libspdk_bdev_zone_block.a 00:04:07.180 SO libspdk_bdev_aio.so.6.0 00:04:07.180 SYMLINK libspdk_bdev_error.so 00:04:07.180 SO libspdk_bdev_passthru.so.6.0 00:04:07.180 LIB libspdk_bdev_iscsi.a 00:04:07.180 SO libspdk_bdev_zone_block.so.6.0 00:04:07.180 LIB libspdk_bdev_malloc.a 00:04:07.180 SO libspdk_bdev_iscsi.so.6.0 00:04:07.180 SYMLINK libspdk_bdev_aio.so 00:04:07.180 SO libspdk_bdev_malloc.so.6.0 00:04:07.180 LIB libspdk_bdev_delay.a 00:04:07.180 SYMLINK libspdk_bdev_passthru.so 00:04:07.180 SYMLINK libspdk_bdev_zone_block.so 00:04:07.180 SO libspdk_bdev_delay.so.6.0 00:04:07.180 SYMLINK libspdk_bdev_iscsi.so 00:04:07.180 SYMLINK libspdk_bdev_malloc.so 00:04:07.180 LIB libspdk_bdev_lvol.a 00:04:07.180 SYMLINK libspdk_bdev_delay.so 00:04:07.180 SO libspdk_bdev_lvol.so.6.0 00:04:07.180 LIB libspdk_bdev_virtio.a 00:04:07.180 SO libspdk_bdev_virtio.so.6.0 00:04:07.180 SYMLINK libspdk_bdev_lvol.so 00:04:07.438 SYMLINK libspdk_bdev_virtio.so 00:04:07.696 LIB libspdk_bdev_raid.a 00:04:07.696 SO libspdk_bdev_raid.so.6.0 00:04:07.696 SYMLINK libspdk_bdev_raid.so 00:04:09.109 LIB libspdk_bdev_nvme.a 00:04:09.109 SO libspdk_bdev_nvme.so.7.0 00:04:09.109 SYMLINK libspdk_bdev_nvme.so 00:04:09.367 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:09.367 CC module/event/subsystems/vhost_blk/vhost_blk.o 
00:04:09.367 CC module/event/subsystems/vmd/vmd.o 00:04:09.367 CC module/event/subsystems/scheduler/scheduler.o 00:04:09.367 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:09.367 CC module/event/subsystems/keyring/keyring.o 00:04:09.367 CC module/event/subsystems/iobuf/iobuf.o 00:04:09.367 CC module/event/subsystems/sock/sock.o 00:04:09.367 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:09.367 LIB libspdk_event_keyring.a 00:04:09.625 LIB libspdk_event_vhost_blk.a 00:04:09.625 LIB libspdk_event_scheduler.a 00:04:09.625 LIB libspdk_event_vfu_tgt.a 00:04:09.625 LIB libspdk_event_vmd.a 00:04:09.625 LIB libspdk_event_sock.a 00:04:09.625 SO libspdk_event_keyring.so.1.0 00:04:09.625 SO libspdk_event_vhost_blk.so.3.0 00:04:09.625 LIB libspdk_event_iobuf.a 00:04:09.625 SO libspdk_event_scheduler.so.4.0 00:04:09.625 SO libspdk_event_vfu_tgt.so.3.0 00:04:09.625 SO libspdk_event_vmd.so.6.0 00:04:09.625 SO libspdk_event_sock.so.5.0 00:04:09.625 SO libspdk_event_iobuf.so.3.0 00:04:09.625 SYMLINK libspdk_event_keyring.so 00:04:09.625 SYMLINK libspdk_event_vhost_blk.so 00:04:09.625 SYMLINK libspdk_event_scheduler.so 00:04:09.625 SYMLINK libspdk_event_vfu_tgt.so 00:04:09.625 SYMLINK libspdk_event_sock.so 00:04:09.625 SYMLINK libspdk_event_vmd.so 00:04:09.625 SYMLINK libspdk_event_iobuf.so 00:04:09.882 CC module/event/subsystems/accel/accel.o 00:04:09.882 LIB libspdk_event_accel.a 00:04:09.882 SO libspdk_event_accel.so.6.0 00:04:09.882 SYMLINK libspdk_event_accel.so 00:04:10.140 CC module/event/subsystems/bdev/bdev.o 00:04:10.398 LIB libspdk_event_bdev.a 00:04:10.398 SO libspdk_event_bdev.so.6.0 00:04:10.398 SYMLINK libspdk_event_bdev.so 00:04:10.655 CC module/event/subsystems/ublk/ublk.o 00:04:10.655 CC module/event/subsystems/nbd/nbd.o 00:04:10.655 CC module/event/subsystems/scsi/scsi.o 00:04:10.655 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:10.655 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:10.655 LIB libspdk_event_ublk.a 00:04:10.655 LIB libspdk_event_nbd.a 
00:04:10.655 LIB libspdk_event_scsi.a 00:04:10.655 SO libspdk_event_ublk.so.3.0 00:04:10.655 SO libspdk_event_nbd.so.6.0 00:04:10.912 SO libspdk_event_scsi.so.6.0 00:04:10.912 SYMLINK libspdk_event_ublk.so 00:04:10.912 SYMLINK libspdk_event_nbd.so 00:04:10.912 SYMLINK libspdk_event_scsi.so 00:04:10.912 LIB libspdk_event_nvmf.a 00:04:10.912 SO libspdk_event_nvmf.so.6.0 00:04:10.912 SYMLINK libspdk_event_nvmf.so 00:04:10.912 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:10.912 CC module/event/subsystems/iscsi/iscsi.o 00:04:11.171 LIB libspdk_event_vhost_scsi.a 00:04:11.171 SO libspdk_event_vhost_scsi.so.3.0 00:04:11.171 LIB libspdk_event_iscsi.a 00:04:11.171 SO libspdk_event_iscsi.so.6.0 00:04:11.171 SYMLINK libspdk_event_vhost_scsi.so 00:04:11.171 SYMLINK libspdk_event_iscsi.so 00:04:11.429 SO libspdk.so.6.0 00:04:11.429 SYMLINK libspdk.so 00:04:11.429 CC app/trace_record/trace_record.o 00:04:11.429 CXX app/trace/trace.o 00:04:11.429 CC app/spdk_top/spdk_top.o 00:04:11.429 CC test/rpc_client/rpc_client_test.o 00:04:11.429 CC app/spdk_nvme_discover/discovery_aer.o 00:04:11.429 CC app/spdk_nvme_perf/perf.o 00:04:11.695 CC app/spdk_nvme_identify/identify.o 00:04:11.695 CC app/spdk_lspci/spdk_lspci.o 00:04:11.695 TEST_HEADER include/spdk/accel.h 00:04:11.695 TEST_HEADER include/spdk/accel_module.h 00:04:11.695 TEST_HEADER include/spdk/assert.h 00:04:11.695 TEST_HEADER include/spdk/barrier.h 00:04:11.695 TEST_HEADER include/spdk/base64.h 00:04:11.695 TEST_HEADER include/spdk/bdev.h 00:04:11.695 TEST_HEADER include/spdk/bdev_module.h 00:04:11.695 TEST_HEADER include/spdk/bdev_zone.h 00:04:11.695 TEST_HEADER include/spdk/bit_array.h 00:04:11.695 TEST_HEADER include/spdk/bit_pool.h 00:04:11.695 TEST_HEADER include/spdk/blob_bdev.h 00:04:11.695 TEST_HEADER include/spdk/blobfs.h 00:04:11.695 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:11.695 TEST_HEADER include/spdk/blob.h 00:04:11.695 TEST_HEADER include/spdk/conf.h 00:04:11.695 TEST_HEADER include/spdk/config.h 
00:04:11.695 TEST_HEADER include/spdk/cpuset.h 00:04:11.695 TEST_HEADER include/spdk/crc32.h 00:04:11.695 TEST_HEADER include/spdk/crc16.h 00:04:11.695 TEST_HEADER include/spdk/crc64.h 00:04:11.695 TEST_HEADER include/spdk/dif.h 00:04:11.695 TEST_HEADER include/spdk/dma.h 00:04:11.695 TEST_HEADER include/spdk/endian.h 00:04:11.695 TEST_HEADER include/spdk/env_dpdk.h 00:04:11.695 TEST_HEADER include/spdk/env.h 00:04:11.695 TEST_HEADER include/spdk/event.h 00:04:11.695 TEST_HEADER include/spdk/fd_group.h 00:04:11.695 TEST_HEADER include/spdk/fd.h 00:04:11.695 TEST_HEADER include/spdk/file.h 00:04:11.695 TEST_HEADER include/spdk/ftl.h 00:04:11.695 TEST_HEADER include/spdk/gpt_spec.h 00:04:11.695 TEST_HEADER include/spdk/hexlify.h 00:04:11.695 TEST_HEADER include/spdk/histogram_data.h 00:04:11.695 TEST_HEADER include/spdk/idxd.h 00:04:11.695 TEST_HEADER include/spdk/init.h 00:04:11.695 TEST_HEADER include/spdk/idxd_spec.h 00:04:11.695 TEST_HEADER include/spdk/ioat.h 00:04:11.695 TEST_HEADER include/spdk/ioat_spec.h 00:04:11.695 TEST_HEADER include/spdk/iscsi_spec.h 00:04:11.695 TEST_HEADER include/spdk/json.h 00:04:11.695 TEST_HEADER include/spdk/jsonrpc.h 00:04:11.695 TEST_HEADER include/spdk/keyring.h 00:04:11.695 TEST_HEADER include/spdk/keyring_module.h 00:04:11.695 TEST_HEADER include/spdk/likely.h 00:04:11.695 TEST_HEADER include/spdk/log.h 00:04:11.695 TEST_HEADER include/spdk/lvol.h 00:04:11.695 TEST_HEADER include/spdk/memory.h 00:04:11.695 TEST_HEADER include/spdk/mmio.h 00:04:11.695 TEST_HEADER include/spdk/nbd.h 00:04:11.695 TEST_HEADER include/spdk/net.h 00:04:11.695 TEST_HEADER include/spdk/notify.h 00:04:11.695 TEST_HEADER include/spdk/nvme.h 00:04:11.695 TEST_HEADER include/spdk/nvme_intel.h 00:04:11.695 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:11.695 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:11.695 TEST_HEADER include/spdk/nvme_spec.h 00:04:11.695 TEST_HEADER include/spdk/nvme_zns.h 00:04:11.695 TEST_HEADER include/spdk/nvmf_cmd.h 
00:04:11.695 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:11.695 TEST_HEADER include/spdk/nvmf.h 00:04:11.695 TEST_HEADER include/spdk/nvmf_spec.h 00:04:11.695 TEST_HEADER include/spdk/nvmf_transport.h 00:04:11.695 TEST_HEADER include/spdk/opal.h 00:04:11.695 TEST_HEADER include/spdk/opal_spec.h 00:04:11.695 TEST_HEADER include/spdk/pci_ids.h 00:04:11.695 TEST_HEADER include/spdk/pipe.h 00:04:11.695 TEST_HEADER include/spdk/queue.h 00:04:11.695 TEST_HEADER include/spdk/reduce.h 00:04:11.695 TEST_HEADER include/spdk/scheduler.h 00:04:11.695 TEST_HEADER include/spdk/rpc.h 00:04:11.695 TEST_HEADER include/spdk/scsi.h 00:04:11.695 TEST_HEADER include/spdk/sock.h 00:04:11.695 TEST_HEADER include/spdk/scsi_spec.h 00:04:11.695 TEST_HEADER include/spdk/stdinc.h 00:04:11.695 TEST_HEADER include/spdk/string.h 00:04:11.695 TEST_HEADER include/spdk/thread.h 00:04:11.695 TEST_HEADER include/spdk/trace.h 00:04:11.695 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:11.695 TEST_HEADER include/spdk/tree.h 00:04:11.695 TEST_HEADER include/spdk/trace_parser.h 00:04:11.695 TEST_HEADER include/spdk/ublk.h 00:04:11.695 TEST_HEADER include/spdk/uuid.h 00:04:11.695 TEST_HEADER include/spdk/util.h 00:04:11.695 TEST_HEADER include/spdk/version.h 00:04:11.695 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:11.695 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:11.695 TEST_HEADER include/spdk/vhost.h 00:04:11.695 TEST_HEADER include/spdk/vmd.h 00:04:11.695 TEST_HEADER include/spdk/xor.h 00:04:11.695 TEST_HEADER include/spdk/zipf.h 00:04:11.695 CXX test/cpp_headers/accel.o 00:04:11.695 CXX test/cpp_headers/accel_module.o 00:04:11.695 CXX test/cpp_headers/assert.o 00:04:11.695 CXX test/cpp_headers/barrier.o 00:04:11.695 CXX test/cpp_headers/base64.o 00:04:11.695 CXX test/cpp_headers/bdev.o 00:04:11.695 CXX test/cpp_headers/bdev_module.o 00:04:11.695 CC app/spdk_dd/spdk_dd.o 00:04:11.695 CXX test/cpp_headers/bit_array.o 00:04:11.695 CXX test/cpp_headers/bdev_zone.o 00:04:11.695 CXX 
test/cpp_headers/bit_pool.o 00:04:11.695 CXX test/cpp_headers/blob_bdev.o 00:04:11.695 CXX test/cpp_headers/blobfs_bdev.o 00:04:11.695 CXX test/cpp_headers/blobfs.o 00:04:11.695 CXX test/cpp_headers/blob.o 00:04:11.695 CXX test/cpp_headers/conf.o 00:04:11.695 CXX test/cpp_headers/config.o 00:04:11.695 CC app/iscsi_tgt/iscsi_tgt.o 00:04:11.695 CXX test/cpp_headers/cpuset.o 00:04:11.695 CXX test/cpp_headers/crc16.o 00:04:11.695 CC app/nvmf_tgt/nvmf_main.o 00:04:11.695 CXX test/cpp_headers/crc32.o 00:04:11.695 CC examples/util/zipf/zipf.o 00:04:11.695 CC test/app/histogram_perf/histogram_perf.o 00:04:11.695 CC examples/ioat/perf/perf.o 00:04:11.695 CC app/spdk_tgt/spdk_tgt.o 00:04:11.695 CC examples/ioat/verify/verify.o 00:04:11.695 CC test/env/memory/memory_ut.o 00:04:11.695 CC app/fio/nvme/fio_plugin.o 00:04:11.695 CC test/thread/poller_perf/poller_perf.o 00:04:11.695 CC test/env/vtophys/vtophys.o 00:04:11.695 CC test/env/pci/pci_ut.o 00:04:11.695 CC test/app/stub/stub.o 00:04:11.695 CC test/app/jsoncat/jsoncat.o 00:04:11.695 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:11.695 CC test/dma/test_dma/test_dma.o 00:04:11.695 CC app/fio/bdev/fio_plugin.o 00:04:11.695 CC test/app/bdev_svc/bdev_svc.o 00:04:11.960 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:11.960 CC test/env/mem_callbacks/mem_callbacks.o 00:04:11.960 LINK spdk_lspci 00:04:11.960 LINK rpc_client_test 00:04:11.960 LINK spdk_nvme_discover 00:04:11.960 LINK zipf 00:04:11.960 LINK interrupt_tgt 00:04:11.960 LINK jsoncat 00:04:11.960 CXX test/cpp_headers/crc64.o 00:04:11.960 LINK vtophys 00:04:11.960 CXX test/cpp_headers/dif.o 00:04:11.960 LINK histogram_perf 00:04:11.960 CXX test/cpp_headers/dma.o 00:04:11.960 LINK poller_perf 00:04:11.960 LINK nvmf_tgt 00:04:12.220 CXX test/cpp_headers/endian.o 00:04:12.220 CXX test/cpp_headers/env_dpdk.o 00:04:12.220 CXX test/cpp_headers/env.o 00:04:12.220 CXX test/cpp_headers/event.o 00:04:12.220 CXX test/cpp_headers/fd_group.o 00:04:12.220 CXX 
test/cpp_headers/fd.o 00:04:12.220 LINK env_dpdk_post_init 00:04:12.220 CXX test/cpp_headers/file.o 00:04:12.220 CXX test/cpp_headers/ftl.o 00:04:12.220 CXX test/cpp_headers/gpt_spec.o 00:04:12.220 LINK spdk_trace_record 00:04:12.220 LINK stub 00:04:12.220 LINK iscsi_tgt 00:04:12.220 CXX test/cpp_headers/hexlify.o 00:04:12.220 LINK verify 00:04:12.220 LINK ioat_perf 00:04:12.220 CXX test/cpp_headers/histogram_data.o 00:04:12.220 CXX test/cpp_headers/idxd.o 00:04:12.220 CXX test/cpp_headers/idxd_spec.o 00:04:12.220 LINK spdk_tgt 00:04:12.220 LINK bdev_svc 00:04:12.220 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:12.220 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:12.220 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:12.220 CXX test/cpp_headers/init.o 00:04:12.220 LINK mem_callbacks 00:04:12.220 CXX test/cpp_headers/ioat.o 00:04:12.483 CXX test/cpp_headers/ioat_spec.o 00:04:12.483 CXX test/cpp_headers/iscsi_spec.o 00:04:12.483 CXX test/cpp_headers/json.o 00:04:12.483 CXX test/cpp_headers/jsonrpc.o 00:04:12.483 LINK spdk_trace 00:04:12.483 LINK spdk_dd 00:04:12.483 CXX test/cpp_headers/keyring.o 00:04:12.483 CXX test/cpp_headers/keyring_module.o 00:04:12.483 CXX test/cpp_headers/likely.o 00:04:12.483 CXX test/cpp_headers/log.o 00:04:12.483 CXX test/cpp_headers/lvol.o 00:04:12.483 CXX test/cpp_headers/memory.o 00:04:12.483 CXX test/cpp_headers/mmio.o 00:04:12.483 CXX test/cpp_headers/nbd.o 00:04:12.483 LINK pci_ut 00:04:12.483 CXX test/cpp_headers/net.o 00:04:12.483 CXX test/cpp_headers/notify.o 00:04:12.483 CXX test/cpp_headers/nvme.o 00:04:12.483 CXX test/cpp_headers/nvme_intel.o 00:04:12.483 CXX test/cpp_headers/nvme_ocssd.o 00:04:12.483 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:12.483 CXX test/cpp_headers/nvme_spec.o 00:04:12.483 CXX test/cpp_headers/nvme_zns.o 00:04:12.483 CXX test/cpp_headers/nvmf_cmd.o 00:04:12.483 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:12.483 CXX test/cpp_headers/nvmf.o 00:04:12.483 CXX test/cpp_headers/nvmf_spec.o 00:04:12.483 
CXX test/cpp_headers/nvmf_transport.o 00:04:12.483 CXX test/cpp_headers/opal.o 00:04:12.483 LINK test_dma 00:04:12.745 CXX test/cpp_headers/opal_spec.o 00:04:12.745 CC examples/sock/hello_world/hello_sock.o 00:04:12.745 LINK nvme_fuzz 00:04:12.745 CC examples/vmd/led/led.o 00:04:12.745 CC examples/vmd/lsvmd/lsvmd.o 00:04:12.745 CXX test/cpp_headers/pci_ids.o 00:04:12.745 CC examples/thread/thread/thread_ex.o 00:04:12.745 CC test/event/reactor/reactor.o 00:04:12.745 LINK spdk_nvme 00:04:12.745 CC test/event/event_perf/event_perf.o 00:04:12.745 CXX test/cpp_headers/pipe.o 00:04:12.745 CC examples/idxd/perf/perf.o 00:04:12.745 CXX test/cpp_headers/queue.o 00:04:12.745 CC test/event/reactor_perf/reactor_perf.o 00:04:12.745 CXX test/cpp_headers/reduce.o 00:04:12.745 LINK spdk_bdev 00:04:12.745 CXX test/cpp_headers/rpc.o 00:04:12.745 CXX test/cpp_headers/scheduler.o 00:04:12.745 CXX test/cpp_headers/scsi.o 00:04:12.745 CXX test/cpp_headers/scsi_spec.o 00:04:12.745 CXX test/cpp_headers/sock.o 00:04:13.004 CXX test/cpp_headers/stdinc.o 00:04:13.004 CXX test/cpp_headers/string.o 00:04:13.004 CXX test/cpp_headers/thread.o 00:04:13.004 CC test/event/app_repeat/app_repeat.o 00:04:13.004 CXX test/cpp_headers/trace.o 00:04:13.004 CXX test/cpp_headers/trace_parser.o 00:04:13.004 CXX test/cpp_headers/tree.o 00:04:13.004 CC test/event/scheduler/scheduler.o 00:04:13.004 CXX test/cpp_headers/ublk.o 00:04:13.004 CXX test/cpp_headers/util.o 00:04:13.004 CXX test/cpp_headers/uuid.o 00:04:13.004 CXX test/cpp_headers/version.o 00:04:13.004 CXX test/cpp_headers/vfio_user_pci.o 00:04:13.004 CXX test/cpp_headers/vfio_user_spec.o 00:04:13.004 LINK lsvmd 00:04:13.004 CXX test/cpp_headers/vhost.o 00:04:13.004 CXX test/cpp_headers/vmd.o 00:04:13.004 CXX test/cpp_headers/xor.o 00:04:13.004 CXX test/cpp_headers/zipf.o 00:04:13.004 CC app/vhost/vhost.o 00:04:13.004 LINK led 00:04:13.004 LINK vhost_fuzz 00:04:13.004 LINK reactor 00:04:13.004 LINK event_perf 00:04:13.004 LINK memory_ut 00:04:13.004 
LINK reactor_perf 00:04:13.004 LINK spdk_nvme_identify 00:04:13.004 LINK spdk_nvme_perf 00:04:13.263 LINK hello_sock 00:04:13.263 LINK app_repeat 00:04:13.263 LINK spdk_top 00:04:13.263 LINK thread 00:04:13.263 CC test/nvme/sgl/sgl.o 00:04:13.263 CC test/nvme/e2edp/nvme_dp.o 00:04:13.263 CC test/nvme/startup/startup.o 00:04:13.263 CC test/nvme/simple_copy/simple_copy.o 00:04:13.263 CC test/nvme/reset/reset.o 00:04:13.263 CC test/nvme/err_injection/err_injection.o 00:04:13.263 CC test/nvme/aer/aer.o 00:04:13.263 CC test/nvme/reserve/reserve.o 00:04:13.263 CC test/nvme/overhead/overhead.o 00:04:13.263 CC test/nvme/connect_stress/connect_stress.o 00:04:13.263 CC test/nvme/boot_partition/boot_partition.o 00:04:13.263 CC test/accel/dif/dif.o 00:04:13.263 CC test/blobfs/mkfs/mkfs.o 00:04:13.263 CC test/nvme/compliance/nvme_compliance.o 00:04:13.521 CC test/nvme/fused_ordering/fused_ordering.o 00:04:13.521 LINK vhost 00:04:13.521 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:13.521 CC test/nvme/cuse/cuse.o 00:04:13.521 CC test/nvme/fdp/fdp.o 00:04:13.521 CC test/lvol/esnap/esnap.o 00:04:13.521 LINK scheduler 00:04:13.521 LINK idxd_perf 00:04:13.521 LINK startup 00:04:13.521 LINK connect_stress 00:04:13.521 LINK reserve 00:04:13.521 CC examples/nvme/arbitration/arbitration.o 00:04:13.521 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:13.521 CC examples/nvme/abort/abort.o 00:04:13.521 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:13.521 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:13.521 CC examples/nvme/hotplug/hotplug.o 00:04:13.521 CC examples/nvme/reconnect/reconnect.o 00:04:13.521 LINK doorbell_aers 00:04:13.779 CC examples/nvme/hello_world/hello_world.o 00:04:13.779 LINK simple_copy 00:04:13.779 LINK mkfs 00:04:13.779 LINK boot_partition 00:04:13.779 LINK aer 00:04:13.779 LINK sgl 00:04:13.779 LINK err_injection 00:04:13.779 LINK reset 00:04:13.779 LINK overhead 00:04:13.779 CC examples/accel/perf/accel_perf.o 00:04:13.779 LINK fused_ordering 
00:04:13.779 LINK nvme_compliance 00:04:13.779 LINK nvme_dp 00:04:13.779 CC examples/blob/cli/blobcli.o 00:04:13.779 CC examples/blob/hello_world/hello_blob.o 00:04:14.036 LINK cmb_copy 00:04:14.036 LINK fdp 00:04:14.036 LINK pmr_persistence 00:04:14.036 LINK dif 00:04:14.036 LINK hotplug 00:04:14.036 LINK arbitration 00:04:14.036 LINK hello_world 00:04:14.036 LINK reconnect 00:04:14.036 LINK abort 00:04:14.294 LINK hello_blob 00:04:14.294 LINK nvme_manage 00:04:14.294 LINK accel_perf 00:04:14.294 LINK blobcli 00:04:14.294 CC test/bdev/bdevio/bdevio.o 00:04:14.859 CC examples/bdev/hello_world/hello_bdev.o 00:04:14.859 CC examples/bdev/bdevperf/bdevperf.o 00:04:14.859 LINK iscsi_fuzz 00:04:14.859 LINK bdevio 00:04:14.859 LINK hello_bdev 00:04:14.859 LINK cuse 00:04:15.425 LINK bdevperf 00:04:15.990 CC examples/nvmf/nvmf/nvmf.o 00:04:16.248 LINK nvmf 00:04:18.805 LINK esnap 00:04:18.805 00:04:18.805 real 0m41.312s 00:04:18.805 user 7m23.798s 00:04:18.805 sys 1m49.129s 00:04:18.805 05:24:12 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:18.805 05:24:12 make -- common/autotest_common.sh@10 -- $ set +x 00:04:18.805 ************************************ 00:04:18.805 END TEST make 00:04:18.805 ************************************ 00:04:18.805 05:24:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:18.805 05:24:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:18.805 05:24:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:18.805 05:24:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.805 05:24:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:18.805 05:24:12 -- pm/common@44 -- $ pid=1388783 00:04:18.805 05:24:12 -- pm/common@50 -- $ kill -TERM 1388783 00:04:18.805 05:24:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.805 05:24:12 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:18.805 05:24:12 -- pm/common@44 -- $ pid=1388785 00:04:18.805 05:24:12 -- pm/common@50 -- $ kill -TERM 1388785 00:04:18.805 05:24:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.805 05:24:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:18.805 05:24:12 -- pm/common@44 -- $ pid=1388787 00:04:18.805 05:24:12 -- pm/common@50 -- $ kill -TERM 1388787 00:04:18.805 05:24:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.805 05:24:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:18.805 05:24:12 -- pm/common@44 -- $ pid=1388816 00:04:18.805 05:24:12 -- pm/common@50 -- $ sudo -E kill -TERM 1388816 00:04:19.062 05:24:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:19.062 05:24:12 -- nvmf/common.sh@7 -- # uname -s 00:04:19.062 05:24:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.062 05:24:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.062 05:24:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.062 05:24:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.062 05:24:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.062 05:24:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.062 05:24:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.062 05:24:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.062 05:24:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.062 05:24:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.062 05:24:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:19.062 05:24:12 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:19.062 05:24:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.062 05:24:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.062 05:24:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:19.062 05:24:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.062 05:24:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.062 05:24:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.062 05:24:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.062 05:24:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.062 05:24:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.062 05:24:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.062 05:24:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.062 05:24:12 -- paths/export.sh@5 -- # export PATH 00:04:19.063 05:24:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.063 05:24:12 -- nvmf/common.sh@47 -- # : 0 00:04:19.063 05:24:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:19.063 05:24:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:19.063 05:24:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.063 05:24:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.063 05:24:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.063 05:24:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:19.063 05:24:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:19.063 05:24:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:19.063 05:24:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:19.063 05:24:12 -- spdk/autotest.sh@32 -- # uname -s 00:04:19.063 05:24:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:19.063 05:24:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:19.063 05:24:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:19.063 05:24:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:19.063 05:24:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:19.063 05:24:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:19.063 05:24:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:19.063 05:24:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:19.063 05:24:12 -- spdk/autotest.sh@48 -- # udevadm_pid=1465087 00:04:19.063 05:24:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:19.063 05:24:12 -- 
spdk/autotest.sh@53 -- # start_monitor_resources 00:04:19.063 05:24:12 -- pm/common@17 -- # local monitor 00:04:19.063 05:24:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.063 05:24:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.063 05:24:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.063 05:24:12 -- pm/common@21 -- # date +%s 00:04:19.063 05:24:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.063 05:24:12 -- pm/common@21 -- # date +%s 00:04:19.063 05:24:12 -- pm/common@25 -- # sleep 1 00:04:19.063 05:24:12 -- pm/common@21 -- # date +%s 00:04:19.063 05:24:12 -- pm/common@21 -- # date +%s 00:04:19.063 05:24:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721877852 00:04:19.063 05:24:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721877852 00:04:19.063 05:24:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721877852 00:04:19.063 05:24:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721877852 00:04:19.063 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721877852_collect-vmstat.pm.log 00:04:19.063 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721877852_collect-cpu-load.pm.log 00:04:19.063 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721877852_collect-cpu-temp.pm.log 00:04:19.063 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721877852_collect-bmc-pm.bmc.pm.log 00:04:19.995 05:24:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:19.995 05:24:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:19.995 05:24:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:19.995 05:24:13 -- common/autotest_common.sh@10 -- # set +x 00:04:19.995 05:24:13 -- spdk/autotest.sh@59 -- # create_test_list 00:04:19.995 05:24:13 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:19.995 05:24:13 -- common/autotest_common.sh@10 -- # set +x 00:04:19.995 05:24:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:19.995 05:24:13 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.995 05:24:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.995 05:24:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:19.995 05:24:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.995 05:24:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:19.995 05:24:13 -- common/autotest_common.sh@1455 -- # uname 00:04:19.995 05:24:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:19.995 05:24:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:19.995 05:24:13 -- common/autotest_common.sh@1475 -- # uname 00:04:19.995 05:24:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:19.995 05:24:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:19.995 05:24:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:19.995 05:24:13 -- spdk/autotest.sh@72 -- # 
hash lcov 00:04:19.995 05:24:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:19.995 05:24:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:19.995 --rc lcov_branch_coverage=1 00:04:19.995 --rc lcov_function_coverage=1 00:04:19.995 --rc genhtml_branch_coverage=1 00:04:19.995 --rc genhtml_function_coverage=1 00:04:19.995 --rc genhtml_legend=1 00:04:19.995 --rc geninfo_all_blocks=1 00:04:19.995 ' 00:04:19.995 05:24:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:19.995 --rc lcov_branch_coverage=1 00:04:19.995 --rc lcov_function_coverage=1 00:04:19.995 --rc genhtml_branch_coverage=1 00:04:19.995 --rc genhtml_function_coverage=1 00:04:19.995 --rc genhtml_legend=1 00:04:19.995 --rc geninfo_all_blocks=1 00:04:19.995 ' 00:04:19.995 05:24:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:19.995 --rc lcov_branch_coverage=1 00:04:19.995 --rc lcov_function_coverage=1 00:04:19.995 --rc genhtml_branch_coverage=1 00:04:19.995 --rc genhtml_function_coverage=1 00:04:19.995 --rc genhtml_legend=1 00:04:19.995 --rc geninfo_all_blocks=1 00:04:19.995 --no-external' 00:04:19.995 05:24:13 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:19.995 --rc lcov_branch_coverage=1 00:04:19.995 --rc lcov_function_coverage=1 00:04:19.995 --rc genhtml_branch_coverage=1 00:04:19.995 --rc genhtml_function_coverage=1 00:04:19.995 --rc genhtml_legend=1 00:04:19.995 --rc geninfo_all_blocks=1 00:04:19.995 --no-external' 00:04:19.995 05:24:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:19.995 lcov: LCOV version 1.14 00:04:19.995 05:24:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:46.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:46.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:49.839 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:49.839 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 
00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:49.839 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:49.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:49.840 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:49.840 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:49.840 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:54.030 05:24:46 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:54.030 05:24:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.030 05:24:46 -- common/autotest_common.sh@10 -- # set +x 00:04:54.030 05:24:46 -- spdk/autotest.sh@91 -- # rm -f 00:04:54.030 05:24:46 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.596 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:54.596 
0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:04:54.596 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:04:54.596 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:04:54.596 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:04:54.596 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:04:54.596 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:04:54.596 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:04:54.596 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:04:54.596 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:04:54.596 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:04:54.596 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:04:54.596 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:04:54.596 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:04:54.596 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:04:54.596 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:04:54.596 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:54.854 05:24:48 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:04:54.854 05:24:48 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:54.854 05:24:48 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:54.854 05:24:48 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:54.854 05:24:48 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:54.854 05:24:48 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:54.854 05:24:48 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:54.854 05:24:48 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:54.854 05:24:48 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:54.854 05:24:48 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:04:54.854 05:24:48 -- spdk/autotest.sh@110 -- # for dev in 
/dev/nvme*n!(*p*)
00:04:54.854 05:24:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:04:54.854 05:24:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:04:54.854 05:24:48 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:04:54.854 05:24:48 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:54.854 No valid GPT data, bailing
00:04:54.854 05:24:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:54.854 05:24:48 -- scripts/common.sh@391 -- # pt=
00:04:54.854 05:24:48 -- scripts/common.sh@392 -- # return 1
00:04:54.854 05:24:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:54.854 1+0 records in
00:04:54.854 1+0 records out
00:04:54.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00160179 s, 655 MB/s
00:04:54.854 05:24:48 -- spdk/autotest.sh@118 -- # sync
00:04:54.854 05:24:48 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:54.854 05:24:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:54.854 05:24:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:56.753 05:24:50 -- spdk/autotest.sh@124 -- # uname -s
00:04:56.753 05:24:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:04:56.753 05:24:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:56.753 05:24:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:56.753 05:24:50 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:56.753 05:24:50 -- common/autotest_common.sh@10 -- # set +x
00:04:56.753 ************************************
00:04:56.753 START TEST setup.sh
00:04:56.753 ************************************
00:04:56.753 05:24:50 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:56.753 * Looking for test storage... 
00:04:56.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:56.753 05:24:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:56.753 05:24:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:56.753 05:24:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:56.753 05:24:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.753 05:24:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.753 05:24:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:56.753 ************************************ 00:04:56.753 START TEST acl 00:04:56.753 ************************************ 00:04:56.753 05:24:50 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:56.753 * Looking for test storage... 00:04:56.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:56.754 05:24:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:56.754 05:24:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:56.754 05:24:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:56.754 05:24:50 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:56.754 05:24:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:56.754 05:24:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:56.754 05:24:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:56.754 05:24:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:56.754 05:24:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:56.754 05:24:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:56.754 05:24:50 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:56.754 05:24:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:56.754 05:24:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:56.754 05:24:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:56.754 05:24:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.754 05:24:50 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.127 05:24:51 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:58.127 05:24:51 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:58.127 05:24:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:58.127 05:24:51 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:58.127 05:24:51 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.127 05:24:51 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:59.503 Hugepages 00:04:59.503 node hugesize free / total 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 00:04:59.503 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:59.503 
05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.503 05:24:52 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:59.503 05:24:52 setup.sh.acl -- 
setup/acl.sh@54 -- # run_test denied denied 00:04:59.503 05:24:52 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.503 05:24:52 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.503 05:24:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:59.503 ************************************ 00:04:59.503 START TEST denied 00:04:59.503 ************************************ 00:04:59.503 05:24:52 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:59.503 05:24:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:59.503 05:24:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:59.503 05:24:52 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:59.503 05:24:52 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.503 05:24:52 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:00.880 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:05:00.880 05:24:54 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:05:00.880 05:24:54 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:00.880 05:24:54 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:00.880 05:24:54 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:05:00.880 05:24:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:05:00.880 05:24:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:00.880 05:24:54 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:00.880 05:24:54 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:00.880 05:24:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.880 05:24:54 
setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.425 00:05:03.425 real 0m3.865s 00:05:03.425 user 0m1.143s 00:05:03.425 sys 0m1.825s 00:05:03.425 05:24:56 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.425 05:24:56 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:03.425 ************************************ 00:05:03.425 END TEST denied 00:05:03.425 ************************************ 00:05:03.425 05:24:56 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:03.425 05:24:56 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.425 05:24:56 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.425 05:24:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:03.425 ************************************ 00:05:03.425 START TEST allowed 00:05:03.425 ************************************ 00:05:03.425 05:24:56 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:03.425 05:24:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:05:03.425 05:24:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:03.425 05:24:56 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:05:03.425 05:24:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.425 05:24:56 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:05.952 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:05.952 05:24:59 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:05.952 05:24:59 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:05.952 05:24:59 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:05.952 05:24:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 
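[Editor's note] The denied test that just ended (and the allowed test that follows) drive setup.sh with `PCI_BLOCKED=' 0000:88:00.0'` and `PCI_ALLOWED=0000:88:00.0` respectively. A hedged sketch of the filtering semantics those variables imply; `pci_can_use` is an illustrative reimplementation, not necessarily setup.sh's exact code:

```shell
#!/usr/bin/env bash
# pci_can_use BDF: false if the device appears in PCI_BLOCKED (the
# log's "Skipping denied controller"); when PCI_ALLOWED is non-empty,
# only the listed BDFs are eligible; otherwise anything not blocked
# is allowed.
pci_can_use() {
    local bdf=$1 dev
    for dev in $PCI_BLOCKED; do
        [ "$dev" = "$bdf" ] && return 1
    done
    [ -z "$PCI_ALLOWED" ] && return 0
    for dev in $PCI_ALLOWED; do
        [ "$dev" = "$bdf" ] && return 0
    done
    return 1
}
```

With the denied test's settings this rejects 0000:88:00.0; with the allowed test's settings only 0000:88:00.0 passes, which is why it alone is rebound to vfio-pci below.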
00:05:05.952 05:24:59 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.326 00:05:07.326 real 0m3.808s 00:05:07.326 user 0m1.017s 00:05:07.326 sys 0m1.629s 00:05:07.326 05:25:00 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.326 05:25:00 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:07.326 ************************************ 00:05:07.326 END TEST allowed 00:05:07.326 ************************************ 00:05:07.326 00:05:07.326 real 0m10.441s 00:05:07.326 user 0m3.227s 00:05:07.326 sys 0m5.222s 00:05:07.326 05:25:00 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.326 05:25:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:07.326 ************************************ 00:05:07.326 END TEST acl 00:05:07.326 ************************************ 00:05:07.326 05:25:00 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:07.326 05:25:00 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.326 05:25:00 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.326 05:25:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:07.326 ************************************ 00:05:07.326 START TEST hugepages 00:05:07.326 ************************************ 00:05:07.326 05:25:00 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:07.326 * Looking for test storage... 
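[Editor's note] The acl test traced above collected its device list with `read -r _ dev _ _ _ driver _` over `setup.sh status` output, keeping PCI BDFs bound to the nvme driver (one match on this host: 0000:88:00.0). A self-contained sketch of that loop; `collect_nvme_devs` is a made-up wrapper name:

```shell
#!/usr/bin/env bash
# collect_nvme_devs: read "Type BDF Vendor Device NUMA Driver ..." rows
# on stdin; field 2 must look like a PCI BDF (*:*:*.*) and field 6 must
# be "nvme", as the traced acl.sh@18..22 checks do. Matching BDFs are
# appended to the global devs array.
collect_nvme_devs() {
    local _ dev driver
    devs=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue    # skip hugepage/header rows
        [[ $driver == nvme ]] || continue    # ioatdma devices are skipped
        devs+=("$dev")
    done
}
```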
00:05:07.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 41549080 kB' 'MemAvailable: 45059880 kB' 'Buffers: 2704 kB' 'Cached: 12400244 kB' 'SwapCached: 0 kB' 'Active: 9393808 kB' 'Inactive: 3508308 kB' 'Active(anon): 8997176 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502568 kB' 'Mapped: 210908 kB' 'Shmem: 8498008 kB' 'KReclaimable: 199400 kB' 'Slab: 580164 kB' 'SReclaimable: 199400 kB' 'SUnreclaim: 380764 kB' 'KernelStack: 13072 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 10117200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 
05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.326 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 
05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.327 05:25:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [... identical non-matching checks of the remaining /proc/meminfo keys elided ...] 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:07.328 
05:25:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:07.328 05:25:00 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:07.328 05:25:00 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.328 05:25:00 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.328 05:25:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.328 ************************************ 00:05:07.328 START TEST default_setup 00:05:07.328 ************************************ 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.328 05:25:00 
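The trace above repeatedly runs `IFS=': ' read -r var val _` over `/proc/meminfo`-style lines and `continue`s past every key that is not the one requested, echoing the value on a match. A minimal sketch of that field-scan technique, assuming a sample input rather than the live `/proc/meminfo` (`get_field` is a hypothetical name, not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Sketch of the field-scan technique traced above: split each
# "Key: value unit" line on ': ' and print the value of one key.
# get_field is a hypothetical name, not the SPDK helper.
get_field() {
  local want=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$want" ]] || continue   # skip non-matching keys
    echo "$val"
    return 0
  done
}

# Assumed sample input standing in for /proc/meminfo.
sample=$'MemTotal: 60541728 kB\nHugepagesize: 2048 kB'
hp_size=$(printf '%s\n' "$sample" | get_field Hugepagesize)
echo "$hp_size"   # 2048
```

The third field (`_`) swallows the trailing unit (`kB`), which is why the echoed value is the bare number the caller can assign to `default_hugepages`.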
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.328 05:25:00 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.700 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:08.700 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:08.700 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:08.700 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:08.700 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:08.700 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:08.700 0000:00:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:05:08.700 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:08.700 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:08.700 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:08.700 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:08.700 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:08.700 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:08.700 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:08.700 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:08.700 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:09.639 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43655980 kB' 'MemAvailable: 47166688 kB' 'Buffers: 2704 kB' 'Cached: 12400332 kB' 'SwapCached: 0 kB' 'Active: 9407792 kB' 'Inactive: 3508308 kB' 'Active(anon): 9011160 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516284 kB' 'Mapped: 210140 kB' 'Shmem: 8498096 kB' 'KReclaimable: 199216 kB' 'Slab: 579468 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380252 kB' 'KernelStack: 12784 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 
16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.639 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [... identical non-matching checks of the remaining /proc/meminfo keys elided ...] 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.641 05:25:03 
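Each `get_meminfo` call above first captures the whole meminfo source with `mapfile -t mem` and then scans that array, so repeated queries (`AnonHugePages`, then `HugePages_Surp`) are answered from one consistent snapshot instead of re-reading a file that may change between reads. A minimal sketch of that snapshot-then-query pattern, using assumed sample values (`snapshot_query` is a hypothetical name):

```shell
#!/usr/bin/env bash
# Sketch of the snapshot pattern used by get_meminfo above: read the
# meminfo source once into an array, then answer repeated queries from
# that snapshot. snapshot_query is a hypothetical name; the values are
# assumed sample data, not live /proc/meminfo.
declare -a mem
mapfile -t mem < <(printf '%s\n' \
  'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0')

snapshot_query() {
  local want=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$want" ]]; then
      echo "$val"
      return 0
    fi
  done < <(printf '%s\n' "${mem[@]}")  # scan the snapshot, not the file
}

surp=$(snapshot_query HugePages_Surp)
total=$(snapshot_query HugePages_Total)
echo "$total $surp"   # 1024 0
```

The `mem=("${mem[@]#Node +([0-9]) }")` step in the trace additionally strips the `Node N ` prefix so per-node `meminfo` files parse with the same loop; the sketch omits that since its sample has no prefix.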
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43658160 kB' 'MemAvailable: 47168868 kB' 'Buffers: 2704 kB' 'Cached: 12400332 kB' 'SwapCached: 0 kB' 'Active: 9407816 kB' 'Inactive: 3508308 kB' 'Active(anon): 9011184 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516368 kB' 'Mapped: 210096 kB' 'Shmem: 8498096 kB' 'KReclaimable: 199216 kB' 'Slab: 579460 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380244 kB' 'KernelStack: 12848 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.641 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ key == HugePages_Surp ]] / continue xtrace repeated for every remaining /proc/meminfo key until HugePages_Surp matches ...]
00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- #
mapfile -t mem 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43659272 kB' 'MemAvailable: 47169980 kB' 'Buffers: 2704 kB' 'Cached: 12400332 kB' 'SwapCached: 0 kB' 'Active: 9407540 kB' 'Inactive: 3508308 kB' 'Active(anon): 9010908 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516052 kB' 'Mapped: 210016 kB' 'Shmem: 8498096 kB' 'KReclaimable: 199216 kB' 'Slab: 579440 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380224 kB' 'KernelStack: 12816 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.643 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ key == HugePages_Rsvd ]] / continue xtrace repeated for each remaining /proc/meminfo key ...]
00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31
-- # IFS=': ' 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
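The long run of `[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `continue` lines above is bash xtrace output from a meminfo scan: each line of `/proc/meminfo` (or a per-node `meminfo` file under sysfs) is split on `': '` into a key and a value, non-matching keys are skipped with `continue`, and the value is echoed once the requested key is found. A minimal sketch of that pattern (the function name and the fallback-to-0 behavior here are our reconstruction from the trace, not the actual `setup/common.sh` source):

```shell
#!/usr/bin/env bash
# Reconstruction of the meminfo-scan pattern visible in the xtrace above.
# Splits each line of the chosen meminfo file on ': ' and prints the
# value for the requested key; prints 0 if the key is absent.
get_meminfo_sketch() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    # Per-node stats live under /sys/devices/system/node/node<N>/meminfo.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while read -r line; do
        line=${line#Node "$node" }   # per-node files prefix each line "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

get_meminfo_sketch MemTotal   # system-wide total, in kB
```

The linear scan is why the trace repeats the same `IFS=': '` / `read -r var val _` / `continue` triple for every field before the requested one.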
00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.644 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 
05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:09.645 nr_hugepages=1024 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.645 resv_hugepages=0 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.645 surplus_hugepages=0 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.645 anon_hugepages=0 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:09.645 05:25:03 
setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43658012 kB' 'MemAvailable: 47168720 kB' 'Buffers: 2704 kB' 'Cached: 12400376 kB' 'SwapCached: 0 kB' 'Active: 9407620 kB' 'Inactive: 3508308 kB' 'Active(anon): 9010988 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516084 kB' 'Mapped: 210016 kB' 'Shmem: 8498140 kB' 'KReclaimable: 199216 kB' 'Slab: 579440 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380224 kB' 'KernelStack: 12816 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.645 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 
05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
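The `/proc/meminfo` snapshot echoed earlier in this trace reports `HugePages_Total: 1024`, `Hugepagesize: 2048 kB`, and `Hugetlb: 2097152 kB`, which is internally consistent: the hugetlb total is the page count times the page size. A quick check using the values copied from the log:

```shell
# Values copied from the meminfo snapshot printed in this trace.
hugepages_total=1024
hugepagesize_kb=2048

# Hugetlb should equal HugePages_Total * Hugepagesize.
hugetlb_kb=$((hugepages_total * hugepagesize_kb))
echo "Hugetlb: $hugetlb_kb kB"   # matches the 'Hugetlb: 2097152 kB' line
```

This is also the identity the test's `(( 1024 == nr_hugepages + surp + resv ))` checks rely on: with `resv_hugepages=0` and `surplus_hugepages=0`, all 1024 default-sized pages are accounted for.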
00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.646 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 
05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 
05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.647 05:25:03 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 18535808 kB' 'MemUsed: 14341132 kB' 'SwapCached: 0 kB' 'Active: 7904492 kB' 'Inactive: 3261736 kB' 'Active(anon): 7691300 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10834980 kB' 'Mapped: 128412 kB' 'AnonPages: 334384 kB' 'Shmem: 7360052 kB' 'KernelStack: 7816 kB' 'PageTables: 4884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130408 kB' 'Slab: 370324 kB' 'SReclaimable: 130408 kB' 'SUnreclaim: 239916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.647 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 
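[Editor's note] The `get_meminfo HugePages_Surp 0` call traced here takes the per-node branch: because `node=0` was passed, `mem_f` is switched from `/proc/meminfo` to `/sys/devices/system/node/node0/meminfo`, whose lines carry a `Node 0 ` prefix that is stripped with the extglob expansion `"${mem[@]#Node +([0-9]) }"` before the same key/value scan runs. A small sketch of that prefix-strip step (hypothetical reconstruction, file contents canned):

```shell
#!/usr/bin/env bash
# Sketch of the per-node branch: node meminfo lines look like
# "Node 0 MemTotal: ...", so the "Node N " prefix is removed first.
shopt -s extglob   # needed for the +([0-9]) pattern below

node_meminfo=$(mktemp)
printf '%s\n' 'Node 0 MemTotal: 32876940 kB' 'Node 0 HugePages_Surp: 0' > "$node_meminfo"

mapfile -t mem < "$node_meminfo"
mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix from each line

for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Surp ]] && echo "$val"   # prints 0
done
rm -f "$node_meminfo"
```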
05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.648 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:09.649 node0=1024 expecting 1024 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == 
\1\0\2\4 ]] 00:05:09.649 00:05:09.649 real 0m2.448s 00:05:09.649 user 0m0.649s 00:05:09.649 sys 0m0.888s 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.649 05:25:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:09.649 ************************************ 00:05:09.649 END TEST default_setup 00:05:09.649 ************************************ 00:05:09.649 05:25:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:09.649 05:25:03 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.649 05:25:03 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.649 05:25:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.906 ************************************ 00:05:09.906 START TEST per_node_1G_alloc 00:05:09.906 ************************************ 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:09.906 05:25:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:09.906 05:25:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.906 05:25:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.840 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:10.840 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:10.840 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:10.840 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:10.840 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:10.840 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:10.840 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:10.840 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:10.840 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:10.840 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:10.840 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:10.840 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:10.840 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:10.840 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:10.840 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:10.840 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:10.840 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 
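[Editor's note] In the `per_node_1G_alloc` prologue traced above, `get_test_nr_hugepages 1048576 0 1` converts the 1 GiB (1048576 kB) per-node request into counts of the default hugepage size (2048 kB per the meminfo dump in this log), assigning 512 pages to each of nodes 0 and 1 and exporting `NRHUGE=512` / `HUGENODE=0,1`. A sketch of that sizing arithmetic (variable names chosen to mirror the trace; not the verbatim SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage sizing seen in the trace:
# a 1 GiB per-node request becomes 512 default 2 MiB hugepages per node.
size_kb=1048576           # requested size per node, in kB (1 GiB)
default_hugepage_kb=2048  # Hugepagesize reported in the meminfo dump
user_nodes=(0 1)          # nodes passed as HUGENODE

nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 512
declare -A nodes_test
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages                 # 512 pages on each node
done

echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${user_nodes[*]}")"
# prints NRHUGE=512 HUGENODE=0,1
```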
00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43664208 kB' 'MemAvailable: 47174916 kB' 'Buffers: 2704 kB' 'Cached: 12400444 kB' 'SwapCached: 0 kB' 'Active: 9408324 kB' 'Inactive: 3508308 kB' 'Active(anon): 9011692 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516688 kB' 'Mapped: 210196 kB' 'Shmem: 8498208 kB' 'KReclaimable: 199216 kB' 'Slab: 579264 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380048 kB' 'KernelStack: 12848 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:10.840 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 
05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43663792 kB' 
'MemAvailable: 47174500 kB' 'Buffers: 2704 kB' 'Cached: 12400448 kB' 'SwapCached: 0 kB' 'Active: 9407748 kB' 'Inactive: 3508308 kB' 'Active(anon): 9011116 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516080 kB' 'Mapped: 210108 kB' 'Shmem: 8498212 kB' 'KReclaimable: 199216 kB' 'Slab: 579240 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380024 kB' 'KernelStack: 12800 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 
05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.107 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43663540 kB' 'MemAvailable: 47174248 kB' 'Buffers: 2704 kB' 'Cached: 12400464 kB' 'SwapCached: 0 kB' 'Active: 9407856 kB' 'Inactive: 3508308 kB' 'Active(anon): 9011224 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516200 kB' 'Mapped: 210028 kB' 'Shmem: 8498228 kB' 'KReclaimable: 199216 kB' 'Slab: 579272 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380056 kB' 'KernelStack: 12848 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.107 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.108 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.109 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:11.109 nr_hugepages=1024 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.109 resv_hugepages=0 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.109 surplus_hugepages=0 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.109 anon_hugepages=0 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.109 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.109 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43663036 kB' 'MemAvailable: 47173744 kB' 'Buffers: 2704 kB' 'Cached: 12400488 kB' 'SwapCached: 0 kB' 'Active: 9407912 kB' 'Inactive: 3508308 kB' 'Active(anon): 9011280 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516200 kB' 'Mapped: 210028 kB' 'Shmem: 8498252 kB' 'KReclaimable: 199216 kB' 'Slab: 579272 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380056 kB' 'KernelStack: 12848 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 
05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.110 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 
05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.111 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19595860 kB' 'MemUsed: 13281080 kB' 'SwapCached: 0 kB' 'Active: 7905268 kB' 'Inactive: 3261736 kB' 'Active(anon): 7692076 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10835056 kB' 'Mapped: 128424 kB' 'AnonPages: 335076 kB' 'Shmem: 7360128 kB' 
'KernelStack: 7864 kB' 'PageTables: 4980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130408 kB' 'Slab: 370236 kB' 'SReclaimable: 130408 kB' 'SUnreclaim: 239828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.112 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / field-match xtrace repeated for each remaining node0 meminfo field (WritebackTmp through HugePages_Free) trimmed ...] 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.113 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24072712 kB' 'MemUsed: 3592076 kB' 'SwapCached: 0 kB' 'Active: 1502664 kB' 'Inactive: 246572 kB' 'Active(anon): 1319224 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 'Inactive(file): 246572 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1568160 kB' 'Mapped: 81604 kB' 'AnonPages: 181116 kB' 'Shmem: 1138148 kB' 'KernelStack: 4984 kB' 'PageTables: 3268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68808 kB' 'Slab: 209036 kB' 'SReclaimable: 68808 kB' 'SUnreclaim: 140228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.113 05:25:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.113 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical field-match xtrace repeated for each remaining node1 meminfo field (SwapCached through HugePages_Total) trimmed ...] 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@31 -- # read -r var val _ 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:11.114 node0=512 expecting 512 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:11.114 node1=512 expecting 512 00:05:11.114 05:25:04 
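The trace above repeatedly shows the get_meminfo pattern: read a per-node meminfo file, strip the "Node N " prefix, split each line on ': ', and return the value of the requested field (here HugePages_Surp). A minimal standalone sketch of that parsing idiom follows; it is not the SPDK helper itself, and the sample input is made up to stand in for /sys/devices/system/node/node1/meminfo.

```shell
# Sketch of the get_meminfo field lookup seen in the xtrace (illustrative,
# not setup/common.sh): split "var: val" lines and echo the requested value.
get_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # skip every field until the requested one, as the trace's `continue`s do
    [ "$var" = "$get" ] || continue
    echo "$val"
    return 0
  done
}

# stand-in for a node meminfo file (real values come from sysfs)
sample='Node 1 HugePages_Total: 512
Node 1 HugePages_Free: 512
Node 1 HugePages_Surp: 0'

# emulate mem=("${mem[@]#Node +([0-9]) }") with a simple prefix strip
surp=$(printf '%s\n' "$sample" | sed 's/^Node [0-9]* //' | get_field HugePages_Surp)
echo "HugePages_Surp=$surp"
```

This mirrors why the trace ends each lookup with `echo 0` and `return 0`: node1 has no surplus hugepages, so the accumulator `nodes_test[node] += 0` leaves the expected 512 untouched.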
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:11.114 00:05:11.114 real 0m1.349s 00:05:11.114 user 0m0.585s 00:05:11.114 sys 0m0.723s 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.114 05:25:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:11.114 ************************************ 00:05:11.114 END TEST per_node_1G_alloc 00:05:11.115 ************************************ 00:05:11.115 05:25:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:11.115 05:25:04 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.115 05:25:04 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.115 05:25:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:11.115 ************************************ 00:05:11.115 START TEST even_2G_alloc 00:05:11.115 ************************************ 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 
-- # local user_nodes 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.115 05:25:04 setup.sh.hugepages.even_2G_alloc 
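The get_test_nr_hugepages_per_node trace above converts the requested size (2097152 kB against a 2048 kB default hugepage) into 1024 pages and spreads them evenly across the two NUMA nodes, 512 each, via the `(( _no_nodes > 0 ))` loop. A hedged sketch of that even split, with illustrative names rather than the actual SPDK functions:

```shell
# Illustrative even hugepage split across NUMA nodes, mirroring the
# nodes_test[_no_nodes - 1]=512 assignments in the trace (not SPDK code).
nr_hugepages=1024
_no_nodes=2
declare -a nodes_test

_per_node=$((nr_hugepages / _no_nodes))
n=$_no_nodes
while (( n > 0 )); do
  # assign from the highest-numbered node down, as the loop in the trace does
  nodes_test[n - 1]=$_per_node
  (( n-- ))
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"
```

With HUGE_EVEN_ALLOC=yes this is what the later "node0=512 expecting 512" / "node1=512 expecting 512" checks verify.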
-- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.495 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:12.495 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:12.495 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:12.495 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:12.495 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:12.495 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:12.495 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:12.495 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:12.495 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:12.495 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:12.495 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:12.495 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:12.495 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:12.495 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:12.495 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:12.495 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:12.495 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@94 -- # local anon 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.495 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43678008 kB' 'MemAvailable: 47188716 kB' 'Buffers: 2704 kB' 'Cached: 12400584 kB' 'SwapCached: 0 kB' 'Active: 9407972 kB' 'Inactive: 3508308 kB' 'Active(anon): 9011340 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516192 kB' 'Mapped: 210140 kB' 'Shmem: 8498348 kB' 
'KReclaimable: 199216 kB' 'Slab: 579184 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 379968 kB' 'KernelStack: 12784 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
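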
val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 
05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.496 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 
00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.497 05:25:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43677872 kB' 'MemAvailable: 47188580 kB' 'Buffers: 2704 kB' 'Cached: 12400592 kB' 'SwapCached: 0 kB' 'Active: 9408244 kB' 'Inactive: 3508308 kB' 'Active(anon): 9011612 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516508 kB' 'Mapped: 210120 kB' 'Shmem: 8498356 kB' 'KReclaimable: 199216 kB' 'Slab: 579200 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 379984 kB' 'KernelStack: 12864 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.497 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.498 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 
05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43677772 kB' 'MemAvailable: 47188480 kB' 'Buffers: 2704 kB' 'Cached: 12400608 kB' 'SwapCached: 0 kB' 'Active: 9408132 kB' 'Inactive: 3508308 kB' 'Active(anon): 9011500 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516348 kB' 'Mapped: 210044 kB' 'Shmem: 8498372 kB' 'KReclaimable: 199216 kB' 'Slab: 579208 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 379992 kB' 'KernelStack: 12880 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 
'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.499 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.500 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.501 nr_hugepages=1024 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.501 resv_hugepages=0 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.501 
surplus_hugepages=0 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.501 anon_hugepages=0 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43678200 kB' 'MemAvailable: 47188908 kB' 'Buffers: 2704 kB' 'Cached: 12400632 kB' 'SwapCached: 0 kB' 'Active: 9407932 kB' 'Inactive: 3508308 kB' 'Active(anon): 9011300 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 
'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516116 kB' 'Mapped: 210044 kB' 'Shmem: 8498396 kB' 'KReclaimable: 199216 kB' 'Slab: 579208 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 379992 kB' 'KernelStack: 12864 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10133888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.501 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.501 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.502 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19598456 kB' 'MemUsed: 13278484 kB' 'SwapCached: 0 kB' 'Active: 7906124 kB' 'Inactive: 3261736 
kB' 'Active(anon): 7692932 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10835192 kB' 'Mapped: 128436 kB' 'AnonPages: 335888 kB' 'Shmem: 7360264 kB' 'KernelStack: 7896 kB' 'PageTables: 5028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130408 kB' 'Slab: 370188 kB' 'SReclaimable: 130408 kB' 'SUnreclaim: 239780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.503 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.504 05:25:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:12.504 05:25:06 [xtrace condensed: the setup/common.sh@31-32 read loop steps through the remaining node0 meminfo fields (KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), continuing until it matches HugePages_Surp] 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.504 05:25:06
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.504 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.505 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:12.505 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:12.505 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.505 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.505 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.505 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.505 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24079744 kB' 'MemUsed: 3585044 kB' 'SwapCached: 0 kB' 'Active: 1502052 kB' 'Inactive: 246572 kB' 'Active(anon): 1318612 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 
'Inactive(file): 246572 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1568164 kB' 'Mapped: 81608 kB' 'AnonPages: 180464 kB' 'Shmem: 1138152 kB' 'KernelStack: 4984 kB' 'PageTables: 3224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68808 kB' 'Slab: 209012 kB' 'SReclaimable: 68808 kB' 'SUnreclaim: 140204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:12.505 05:25:06 [xtrace condensed: the setup/common.sh@31-32 read loop steps through every node1 meminfo field in order (MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, and so on through HugePages_Free), continuing until it matches HugePages_Surp] 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:12.506 node0=512 expecting 512 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:12.506 node1=512 expecting 512 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:12.506 00:05:12.506 real 0m1.394s 00:05:12.506 user 0m0.610s 00:05:12.506 sys 0m0.748s 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.506 05:25:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:12.506 ************************************ 00:05:12.506 END TEST even_2G_alloc 00:05:12.506 ************************************ 00:05:12.506 05:25:06 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:12.506 05:25:06 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.506 05:25:06 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.506 05:25:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:12.506 ************************************ 00:05:12.506 START TEST odd_alloc 00:05:12.506 ************************************ 
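The get_meminfo calls traced in the even_2G_alloc run above (setup/common.sh@17-33) read one field, such as HugePages_Surp, from either /proc/meminfo or the per-NUMA-node meminfo file. The sketch below is a hedged reconstruction from the xtrace, not the verbatim SPDK source: the helper name, paths, and the "Node N " prefix strip come from the trace, while the exact body is an assumption.

```shell
#!/usr/bin/env bash
# Hedged reconstruction of setup/common.sh's get_meminfo, based only on
# the xtrace above; the real SPDK implementation may differ.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val _ line
    local -a mem
    # With a node argument, prefer the per-NUMA-node view when present
    # (setup/common.sh@23-24 in the trace).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node N "; strip that prefix
    # (setup/common.sh@29 in the trace).
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan field by field, skipping everything that is not the requested
    # key: this is the long IFS=': '/read/continue loop filling the log.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
```

Under this reading, `get_meminfo HugePages_Surp 1` corresponds to the trace above echoing 0 for node 1.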
00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 
00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.506 05:25:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:13.886 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:13.886 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:13.886 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:13.886 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:13.886 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:13.886 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:13.886 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:13.886 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:13.886 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:13.886 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:13.886 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:13.886 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:13.886 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:13.886 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:13.886 0000:80:04.2 (8086 0e22): Already using the 
vfio-pci driver 00:05:13.886 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:13.886 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.886 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43673728 kB' 'MemAvailable: 47184436 kB' 'Buffers: 2704 kB' 'Cached: 12400716 kB' 'SwapCached: 0 kB' 'Active: 9405176 kB' 'Inactive: 3508308 kB' 'Active(anon): 9008544 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513256 kB' 'Mapped: 209368 kB' 'Shmem: 8498480 kB' 'KReclaimable: 199216 kB' 'Slab: 579020 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 379804 kB' 'KernelStack: 12800 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10120292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:13.887 05:25:07 [xtrace condensed: the setup/common.sh@31-32 read loop steps through each meminfo field in turn (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, and so on), scanning for AnonHugePages]
00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.887 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.888 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43674896 kB' 'MemAvailable: 47185604 kB' 'Buffers: 2704 kB' 'Cached: 12400720 kB' 'SwapCached: 0 kB' 'Active: 9405468 kB' 'Inactive: 3508308 kB' 'Active(anon): 9008836 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513572 kB' 'Mapped: 209308 kB' 'Shmem: 8498484 kB' 'KReclaimable: 199216 kB' 'Slab: 578980 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 379764 kB' 'KernelStack: 12832 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10120308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.888 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.890 
05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.890 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43676276 kB' 'MemAvailable: 47186984 kB' 'Buffers: 2704 kB' 'Cached: 12400732 kB' 'SwapCached: 0 kB' 'Active: 9404968 kB' 'Inactive: 3508308 kB' 'Active(anon): 9008336 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512992 kB' 'Mapped: 209228 kB' 'Shmem: 8498496 kB' 'KReclaimable: 199216 kB' 'Slab: 578980 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 379764 kB' 'KernelStack: 12816 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10120328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.890 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.891 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 
00:05:13.892 nr_hugepages=1025 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.892 resv_hugepages=0 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.892 surplus_hugepages=0 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.892 anon_hugepages=0 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43677644 kB' 
'MemAvailable: 47188352 kB' 'Buffers: 2704 kB' 'Cached: 12400756 kB' 'SwapCached: 0 kB' 'Active: 9405016 kB' 'Inactive: 3508308 kB' 'Active(anon): 9008384 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513056 kB' 'Mapped: 209228 kB' 'Shmem: 8498520 kB' 'KReclaimable: 199216 kB' 'Slab: 578980 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 379764 kB' 'KernelStack: 12848 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10120348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.892 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 
05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.893 
05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.893 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
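The long run of `continue` steps above is `set -x` tracing a single scan loop in `setup/common.sh`: each `/proc/meminfo` line is split with `IFS=': '`, non-matching keys are skipped, and the value is echoed once `HugePages_Total` is reached (the `\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l` spelling is just how xtrace prints the quoted literal pattern). A minimal standalone sketch of that pattern, with a hypothetical name `get_meminfo_value` and a fabricated sample file:

```shell
# Sketch of the scan pattern in the trace (common.sh@31-33): split each
# "Key:   value kB" line on ": ", `continue` past non-matching keys,
# and echo the value once the requested key is found.
get_meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demonstrate against a fabricated meminfo snippet rather than the live file:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 32876940 kB' 'HugePages_Total: 1025' > "$sample"
get_meminfo_value HugePages_Total "$sample"   # prints 1025
```

This is why the trace steps through every meminfo key in file order before the match: the loop has no early index, it simply reads until the wanted key appears.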
00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19603032 kB' 'MemUsed: 13273908 kB' 'SwapCached: 0 kB' 'Active: 7904616 kB' 'Inactive: 3261736 kB' 'Active(anon): 7691424 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10835316 kB' 'Mapped: 127680 kB' 'AnonPages: 334212 kB' 'Shmem: 7360388 kB' 'KernelStack: 7896 kB' 'PageTables: 4856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130408 kB' 'Slab: 370124 kB' 'SReclaimable: 130408 kB' 'SUnreclaim: 239716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.894 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
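When a node number is given (common.sh@23-29 in the trace), the script prefers `/sys/devices/system/node/nodeN/meminfo` over `/proc/meminfo`. Lines in the per-node file carry a `Node N ` prefix, which the trace strips with an extglob pattern expansion before the same scan loop runs. A small sketch of just that prefix strip, on fabricated lines:

```shell
# Sketch of the "Node N " prefix strip from common.sh@29. The extglob
# pattern +([0-9]) matches the node number; shopt must enable extglob
# before the expansion runs.
shopt -s extglob

# Lines as they would appear in /sys/devices/system/node/node0/meminfo:
mem=('Node 0 MemTotal: 32876940 kB' 'Node 0 HugePages_Total: 512')

# Strip the per-node prefix so the lines look like /proc/meminfo lines:
mem=("${mem[@]#Node +([0-9]) }")

echo "${mem[1]}"   # HugePages_Total: 512
```

After the strip, the same `IFS=': ' read` loop works unchanged on either source, which is why the trace for node 0 looks identical in shape to the earlier system-wide scan.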
00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.895 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.895 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24074900 kB' 'MemUsed: 3589888 kB' 'SwapCached: 0 kB' 'Active: 1500412 kB' 'Inactive: 246572 kB' 'Active(anon): 1316972 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 'Inactive(file): 246572 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1568168 kB' 'Mapped: 81548 kB' 'AnonPages: 178840 kB' 'Shmem: 1138156 kB' 'KernelStack: 4952 kB' 'PageTables: 3056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68808 kB' 'Slab: 208856 kB' 'SReclaimable: 68808 kB' 'SUnreclaim: 140048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.896 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.896 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.897 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.899 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.899 05:25:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
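The long scans above are the trace of setup/common.sh's `get_meminfo` helper: it picks `/proc/meminfo`, or the per-node copy under `/sys/devices/system/node/node<N>/meminfo` when a node is given, then walks every line until it finds the requested key (here `HugePages_Surp`). A minimal standalone sketch of that pattern is below; the awk lookup is an illustrative stand-in, not the helper's actual implementation (the real one, per the trace, reads line-by-line with `IFS=': '`).

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern exercised in the trace above: fetch
# one field (e.g. HugePages_Surp) from /proc/meminfo, or from the
# per-node copy under /sys when a NUMA node number is supplied.
get_meminfo() {
    local key=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; strip that,
    # then print the value of the first line whose key matches.
    awk -v k="$key" '{ sub(/^Node [0-9]+ +/, "") }
                     $1 == (k ":") { print $2; exit }' "$mem_f"
}

get_meminfo HugePages_Surp    # system-wide surplus hugepage count
```

On a multi-node box, `get_meminfo HugePages_Surp 1` reads node 1's copy, which is the value the `(( nodes_test[node] += ... ))` accounting above is summing per node.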
00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:13.899 node0=512 expecting 513 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:13.899 node1=513 expecting 512 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:13.899 00:05:13.899 real 0m1.326s 00:05:13.899 user 0m0.568s 00:05:13.899 sys 0m0.717s 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.899 05:25:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:13.899 ************************************ 00:05:13.899 END TEST odd_alloc 00:05:13.899 ************************************ 00:05:13.899 05:25:07 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:13.899 05:25:07 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.899 05:25:07 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.900 05:25:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:13.900 ************************************ 00:05:13.900 START TEST custom_alloc 00:05:13.900 
************************************ 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 
0 > 0 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # local user_nodes 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.900 05:25:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:15.279 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:15.279 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:15.279 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
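The `get_test_nr_hugepages_per_node` trace above distributes a hugepage total across NUMA nodes: explicit per-node counts in `nodes_hp` are copied straight into `nodes_test`, and otherwise the total is split evenly. The sketch below reconstructs only the even-split case, with any remainder going to the highest-numbered nodes (the odd_alloc run above ended with node0=512, node1=513); it is an illustrative reconstruction, not setup/hugepages.sh verbatim, and the real helper's tie-breaking may differ.

```shell
#!/usr/bin/env bash
# Sketch of the per-node split performed in the trace: divide a hugepage
# total evenly across NUMA nodes, giving any remainder to the
# highest-numbered nodes. Names mirror the trace; the loop body is an
# illustrative reconstruction of setup/hugepages.sh, not a verbatim copy.
split_hugepages_per_node() {
    local nr_hugepages=$1 no_nodes=$2
    local -a nodes_test
    local node
    for (( node = 0; node < no_nodes; node++ )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))
        # the last (nr_hugepages % no_nodes) nodes absorb the remainder
        (( no_nodes - node <= nr_hugepages % no_nodes )) && (( nodes_test[node]++ ))
    done
    echo "${nodes_test[@]}"
}

split_hugepages_per_node 512 2    # → "256 256"
split_hugepages_per_node 1025 2   # → "512 513"
```

This is why the custom_alloc run above ends up with `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'`: the explicit `nodes_hp` counts bypass the even split entirely.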
00:05:15.279 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:15.279 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:15.279 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:15.279 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:15.279 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:15.279 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:15.279 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:15.279 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:15.279 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:15.279 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:15.279 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:15.279 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:15.279 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:15.279 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.279 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.280 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.280 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.280 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.280 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.280 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.280 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 42615864 kB' 'MemAvailable: 46126572 kB' 'Buffers: 2704 kB' 'Cached: 12400848 kB' 'SwapCached: 0 kB' 'Active: 9410500 kB' 'Inactive: 3508308 kB' 'Active(anon): 9013868 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518484 kB' 'Mapped: 209804 kB' 'Shmem: 8498612 kB' 'KReclaimable: 199216 kB' 'Slab: 579436 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380220 kB' 'KernelStack: 12864 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 
kB' 'Committed_AS: 10126668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196568 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB'
[repetitive per-key trace elided: setup/common.sh@31-32 runs `IFS=': '` / `read -r var val _` over every /proc/meminfo key (MemTotal through HardwareCorrupted) and issues `continue` for each key that does not match the pattern \A\n\o\n\H\u\g\e\P\a\g\e\s]
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.281 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 42620936 kB' 'MemAvailable: 46131644 kB' 'Buffers: 2704 kB' 'Cached: 12400852 kB' 'SwapCached: 0 kB' 'Active: 9410348 kB' 'Inactive: 3508308 kB' 'Active(anon): 9013716 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518328 kB' 'Mapped: 209748 kB' 'Shmem: 8498616 kB' 'KReclaimable: 199216 kB' 'Slab: 579396 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380180 kB' 'KernelStack: 12832 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10126688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196536 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB'
[repetitive per-key trace elided: the same setup/common.sh@31-32 scan over every /proc/meminfo key, continuing past each key that does not match the pattern \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.283 05:25:08
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 42620888 kB' 'MemAvailable: 46131596 kB' 'Buffers: 2704 kB' 'Cached: 12400868 kB' 'SwapCached: 0 kB' 'Active: 9404924 kB' 'Inactive: 3508308 kB' 'Active(anon): 9008292 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512876 kB' 'Mapped: 209232 kB' 'Shmem: 8498632 kB' 'KReclaimable: 199216 kB' 'Slab: 579388 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380172 kB' 'KernelStack: 12800 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10120588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.283 05:25:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.283 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.284 05:25:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.284 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:15.285 nr_hugepages=1536 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.285 resv_hugepages=0 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.285 surplus_hugepages=0 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.285 anon_hugepages=0 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 
-- # local var val 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 42620636 kB' 'MemAvailable: 46131344 kB' 'Buffers: 2704 kB' 'Cached: 12400888 kB' 'SwapCached: 0 kB' 'Active: 9405304 kB' 'Inactive: 3508308 kB' 'Active(anon): 9008672 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513212 kB' 'Mapped: 209232 kB' 'Shmem: 8498652 kB' 'KReclaimable: 199216 kB' 'Slab: 579388 kB' 'SReclaimable: 199216 kB' 'SUnreclaim: 380172 kB' 'KernelStack: 12848 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10120608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.285 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 
05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 
05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.286 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # 
local var val 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19596468 kB' 'MemUsed: 13280472 kB' 'SwapCached: 0 kB' 'Active: 7904696 kB' 'Inactive: 3261736 kB' 'Active(anon): 7691504 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10835368 kB' 'Mapped: 127684 kB' 'AnonPages: 334224 kB' 'Shmem: 7360440 kB' 'KernelStack: 7880 kB' 'PageTables: 4860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130408 kB' 'Slab: 370488 kB' 'SReclaimable: 130408 kB' 'SUnreclaim: 240080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.287 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.288 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.547 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.547 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:15.547 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.547 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.547 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.547 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.547 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.547 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.547 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:15.548 05:25:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.548 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 23022152 kB' 'MemUsed: 4642636 kB' 'SwapCached: 0 kB' 'Active: 1500356 kB' 'Inactive: 246572 kB' 'Active(anon): 1316916 kB' 'Inactive(anon): 0 kB' 'Active(file): 183440 kB' 'Inactive(file): 246572 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1568268 kB' 'Mapped: 81548 kB' 'AnonPages: 178704 kB' 'Shmem: 1138256 kB' 'KernelStack: 4936 kB' 'PageTables: 3008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68808 kB' 'Slab: 208900 kB' 'SReclaimable: 68808 kB' 'SUnreclaim: 140092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 
05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.549 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:15.550 node0=512 expecting 512 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:15.550 node1=1024 expecting 1024 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:15.550 00:05:15.550 real 0m1.468s 00:05:15.550 user 0m0.654s 00:05:15.550 sys 0m0.780s 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.550 05:25:09 setup.sh.hugepages.custom_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:05:15.550 ************************************ 00:05:15.550 END TEST custom_alloc 00:05:15.550 ************************************ 00:05:15.550 05:25:09 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:15.550 05:25:09 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.550 05:25:09 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.550 05:25:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:15.550 ************************************ 00:05:15.550 START TEST no_shrink_alloc 00:05:15.550 ************************************ 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 
00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.550 05:25:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.484 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:16.484 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:16.484 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:16.484 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:16.484 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:16.484 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:16.484 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:16.484 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:16.484 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:16.484 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:16.484 0000:80:04.6 (8086 0e26): Already using 
the vfio-pci driver 00:05:16.484 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:16.484 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:16.484 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:16.484 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:16.484 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:16.484 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- 
# [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43656168 kB' 'MemAvailable: 47166928 kB' 'Buffers: 2704 kB' 'Cached: 12400976 kB' 'SwapCached: 0 kB' 'Active: 9406472 kB' 'Inactive: 3508308 kB' 'Active(anon): 9009840 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514668 kB' 'Mapped: 209276 kB' 'Shmem: 8498740 kB' 'KReclaimable: 199320 kB' 'Slab: 579312 kB' 'SReclaimable: 199320 kB' 'SUnreclaim: 379992 kB' 'KernelStack: 12880 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10120808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.748 05:25:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.749 
05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.749 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43656116 kB' 'MemAvailable: 47166872 kB' 'Buffers: 2704 kB' 'Cached: 12400980 kB' 'SwapCached: 0 kB' 'Active: 9406040 kB' 'Inactive: 3508308 kB' 'Active(anon): 9009408 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514104 kB' 'Mapped: 209328 kB' 'Shmem: 8498744 kB' 'KReclaimable: 199312 kB' 'Slab: 579272 kB' 'SReclaimable: 199312 kB' 'SUnreclaim: 379960 kB' 'KernelStack: 12864 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10120824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196388 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 
05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.750 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.751 
05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.751 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43656528 kB' 'MemAvailable: 47167284 kB' 'Buffers: 2704 kB' 'Cached: 12401016 kB' 'SwapCached: 0 kB' 'Active: 9406028 kB' 'Inactive: 3508308 kB' 'Active(anon): 9009396 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513912 kB' 'Mapped: 209252 kB' 'Shmem: 8498780 kB' 'KReclaimable: 199312 kB' 'Slab: 579268 kB' 'SReclaimable: 199312 kB' 'SUnreclaim: 379956 kB' 'KernelStack: 12864 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10127072 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 
05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 
05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.752 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@100 -- # resv=0 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:16.753 nr_hugepages=1024 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.753 resv_hugepages=0 00:05:16.753 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.754 surplus_hugepages=0 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.754 anon_hugepages=0 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43656152 kB' 'MemAvailable: 47166908 kB' 'Buffers: 2704 kB' 'Cached: 12401020 kB' 'SwapCached: 0 kB' 'Active: 9405860 kB' 'Inactive: 3508308 kB' 'Active(anon): 9009228 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513772 kB' 'Mapped: 209252 kB' 'Shmem: 8498784 kB' 'KReclaimable: 199312 kB' 'Slab: 579268 kB' 'SReclaimable: 199312 kB' 'SUnreclaim: 379956 kB' 'KernelStack: 12864 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10121240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.754 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical read/compare/continue xtrace repeats for the remaining /proc/meminfo keys (Buffers through Unaccepted) elided ...] 00:05:16.755 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.755 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:16.755 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.755 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.755 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.755 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:16.755 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.755 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:16.755 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.755 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.756
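An aside on the `\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l` noise in the trace: inside `[[ … == … ]]` an unquoted right-hand side is a glob pattern, so the script quotes the key to force a literal comparison, and bash's `set -x` renders a quoted word by backslash-escaping every character. A minimal sketch of the two comparison modes (not SPDK code):

```shell
#!/usr/bin/env bash
# Unquoted RHS of [[ == ]] is a glob pattern; a quoted RHS matches literally.
# Under `set -x`, the quoted RHS is traced as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l.
var=HugePages_Total
[[ $var == "HugePages_Total" ]] && echo literal-match
[[ $var == HugePages* ]] && echo glob-match
```

Both lines print here; the quoted form is what the trace above is repeating for every meminfo key.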
05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 18544112 kB' 'MemUsed: 14332828 kB' 'SwapCached: 0 kB' 'Active: 7905096 kB' 'Inactive: 3261736 kB' 'Active(anon): 7691904 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10835412 kB' 'Mapped: 127700 kB' 'AnonPages: 334572 kB' 'Shmem: 7360484 kB' 'KernelStack: 7880 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130504 kB' 'Slab: 370436 kB' 'SReclaimable: 130504 kB' 'SUnreclaim: 239932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
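The scan traced above splits each meminfo line on `': '` and `continue`s past every key until it hits the requested one. A standalone sketch of that pattern, against a sample file (the function name `meminfo_field` is ours; the SPDK helper is `get_meminfo` in setup/common.sh):

```shell
#!/usr/bin/env bash
# Scan a meminfo-style file for one key and print its value,
# mirroring the IFS=': ' read loop in the trace above.
meminfo_field() {
	local get=$1 file=$2 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # every other key is skipped
		echo "$val"
		return 0
	done < "$file"
	return 1
}

printf '%s\n' 'MemTotal: 32876940 kB' 'HugePages_Total: 1024' \
	'HugePages_Surp: 0' > /tmp/meminfo.sample
meminfo_field HugePages_Total /tmp/meminfo.sample   # prints 1024
```

With `IFS=': '`, `read -r var val _` puts the key in `var`, the number in `val`, and the `kB` unit (if any) in `_`, which is exactly why the trace compares `$var` alone.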
00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.756 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical read/compare/continue xtrace repeats for the remaining node0 meminfo keys (MemUsed through HugePages_Free) elided ...] 00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:16.757 node0=1024 expecting 1024 00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:16.757 05:25:10
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:16.757 05:25:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:18.134 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:18.134 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:18.134 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:18.134 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:18.134 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:18.134 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:18.134 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:18.134 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:18.134 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:18.134 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:18.134 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:18.134 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:18.134 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:18.134 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:18.134 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:18.134 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:18.134 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:18.134 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:18.134 05:25:11
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:18.134 05:25:11
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43664096 kB' 'MemAvailable: 47174852 kB' 'Buffers: 2704 kB' 'Cached: 12401088 kB' 'SwapCached: 0 kB' 'Active: 9406412 kB' 'Inactive: 3508308 kB' 'Active(anon): 9009780 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514096 kB' 'Mapped: 209280 kB' 'Shmem: 8498852 kB' 'KReclaimable: 199312 kB' 'Slab: 579252 kB' 'SReclaimable: 199312 kB' 'SUnreclaim: 379940 kB' 'KernelStack: 12912 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10121416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB'
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:18.134 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43664096 kB' 'MemAvailable: 47174852 kB' 'Buffers: 2704 kB' 'Cached: 12401088 kB' 'SwapCached: 0 kB' 'Active: 9406456 kB' 'Inactive: 3508308 kB' 'Active(anon): 9009824 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514144 kB' 'Mapped: 209260 kB' 'Shmem: 8498852 kB' 'KReclaimable: 199312 kB' 'Slab: 579228 kB' 'SReclaimable: 199312 kB' 'SUnreclaim: 379916 kB' 'KernelStack: 12880 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10121432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB'
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:18.136 05:25:11
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:18.136 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.137 05:25:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.137 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43664472 kB' 'MemAvailable: 47175228 kB' 'Buffers: 2704 kB' 'Cached: 12401112 kB' 'SwapCached: 0 kB' 'Active: 9406484 kB' 'Inactive: 3508308 kB' 'Active(anon): 9009852 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514156 kB' 'Mapped: 209260 kB' 'Shmem: 8498876 kB' 'KReclaimable: 199312 kB' 'Slab: 579284 kB' 'SReclaimable: 199312 kB' 'SUnreclaim: 379972 kB' 'KernelStack: 12912 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10121456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.138 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.139 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:18.140 nr_hugepages=1024 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.140 resv_hugepages=0 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.140 surplus_hugepages=0 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.140 
anon_hugepages=0 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43664472 kB' 'MemAvailable: 47175228 kB' 'Buffers: 2704 kB' 'Cached: 12401132 kB' 'SwapCached: 0 kB' 'Active: 9406500 kB' 'Inactive: 3508308 kB' 'Active(anon): 9009868 kB' 'Inactive(anon): 0 kB' 'Active(file): 396632 kB' 'Inactive(file): 3508308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514156 kB' 'Mapped: 209260 kB' 'Shmem: 8498896 kB' 'KReclaimable: 199312 kB' 'Slab: 579284 kB' 'SReclaimable: 199312 kB' 'SUnreclaim: 379972 kB' 'KernelStack: 12912 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10121476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2170460 kB' 'DirectMap2M: 16623616 kB' 'DirectMap1G: 50331648 kB' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.140 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 
05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 
05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.141 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.142 05:25:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 18549324 kB' 'MemUsed: 14327616 kB' 'SwapCached: 0 kB' 'Active: 7904860 kB' 'Inactive: 3261736 kB' 'Active(anon): 7691668 kB' 'Inactive(anon): 0 kB' 'Active(file): 213192 kB' 'Inactive(file): 3261736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10835540 kB' 'Mapped: 127708 kB' 'AnonPages: 334204 kB' 'Shmem: 7360612 kB' 'KernelStack: 7912 kB' 'PageTables: 4764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130504 kB' 'Slab: 370412 kB' 'SReclaimable: 130504 kB' 'SUnreclaim: 239908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.142 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.143 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.144 05:25:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:18.144 node0=1024 expecting 1024 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:18.144 00:05:18.144 real 0m2.722s 00:05:18.144 user 0m1.101s 00:05:18.144 sys 0m1.540s 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.144 05:25:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:18.144 ************************************ 00:05:18.144 END TEST no_shrink_alloc 00:05:18.144 ************************************ 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:18.144 05:25:11 setup.sh.hugepages -- 
setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:18.144 05:25:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:18.144 00:05:18.144 real 0m11.089s 00:05:18.144 user 0m4.334s 00:05:18.144 sys 0m5.633s 00:05:18.144 05:25:11 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.144 05:25:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:18.144 ************************************ 00:05:18.144 END TEST hugepages 00:05:18.144 ************************************ 00:05:18.403 05:25:11 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:18.403 05:25:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.403 05:25:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.403 05:25:11 setup.sh -- common/autotest_common.sh@10 
-- # set +x 00:05:18.403 ************************************ 00:05:18.403 START TEST driver 00:05:18.403 ************************************ 00:05:18.403 05:25:11 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:18.403 * Looking for test storage... 00:05:18.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:18.403 05:25:11 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:18.403 05:25:11 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.403 05:25:11 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:20.932 05:25:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:20.932 05:25:14 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.932 05:25:14 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.932 05:25:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:20.932 ************************************ 00:05:20.932 START TEST guess_driver 00:05:20.932 ************************************ 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e 
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:20.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:20.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:20.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:20.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:20.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:20.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:20.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:20.932 
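The driver probe above decides that vfio_pci is usable because `modprobe --show-depends vfio_pci` resolved to a chain of `.ko` objects. A minimal sketch of that check, factored so the dependency text is passed in rather than queried live (the helper name `mod_deps_ok` is an illustration, not the script's actual function):

```shell
# Minimal sketch (assumption: follows the *.ko pattern match visible in
# setup/driver.sh@12 above): treat a module as loadable if the resolved
# dependency list references at least one kernel object file.
mod_deps_ok() {
    # $1: output of `modprobe --show-depends <module>`
    case $1 in
        *.ko*) return 0 ;;   # dependencies resolved to .ko / .ko.xz files
        *)     return 1 ;;   # no kernel objects found; module unavailable
    esac
}

deps='insmod /lib/modules/6.7.0/kernel/drivers/vfio/pci/vfio-pci.ko.xz'
mod_deps_ok "$deps" && echo vfio-pci   # prints vfio-pci
```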
05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:20.932 Looking for driver=vfio-pci 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.932 05:25:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:21.867 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.867 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.867 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.126 05:25:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.061 05:25:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.061 05:25:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.061 05:25:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.061 05:25:16 
setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:23.061 05:25:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:23.061 05:25:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.061 05:25:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:25.591 00:05:25.591 real 0m4.782s 00:05:25.591 user 0m1.090s 00:05:25.591 sys 0m1.780s 00:05:25.591 05:25:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.591 05:25:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:25.591 ************************************ 00:05:25.591 END TEST guess_driver 00:05:25.591 ************************************ 00:05:25.591 00:05:25.591 real 0m7.281s 00:05:25.591 user 0m1.623s 00:05:25.591 sys 0m2.742s 00:05:25.591 05:25:19 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.591 05:25:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:25.591 ************************************ 00:05:25.591 END TEST driver 00:05:25.591 ************************************ 00:05:25.591 05:25:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:25.591 05:25:19 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.592 05:25:19 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.592 05:25:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:25.592 ************************************ 00:05:25.592 START TEST devices 00:05:25.592 ************************************ 00:05:25.592 05:25:19 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:25.592 * Looking for test storage... 
00:05:25.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:25.592 05:25:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:25.592 05:25:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:25.592 05:25:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.592 05:25:19 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:26.965 05:25:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:26.965 05:25:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:26.965 05:25:20 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:26.965 05:25:20 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:26.965 05:25:20 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:26.965 05:25:20 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:26.965 05:25:20 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:26.965 05:25:20 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
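The device scan that follows gates each NVMe namespace on `min_disk_size=3221225472` (3 GiB) before adding it to the test set. A sketch of that size check, using the 1 TB value echoed by the log (`1000204886016`):

```shell
# Minimal sketch (assumption: mirrors the comparison at devices.sh@198/@204):
# a block device qualifies for the tests only if its size in bytes meets
# the 3 GiB floor.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes

disk_big_enough() {
    # $1: device size in bytes
    [ "$1" -ge "$min_disk_size" ]
}

disk_big_enough 1000204886016 && echo usable   # 1 TB disk: prints usable
```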
00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:26.965 05:25:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:26.965 05:25:20 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:26.965 No valid GPT data, bailing 00:05:26.965 05:25:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:26.965 05:25:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:26.965 05:25:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:26.965 05:25:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:26.965 05:25:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:26.965 05:25:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:26.965 05:25:20 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:27.224 05:25:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:27.224 05:25:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:27.224 05:25:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:27.224 05:25:20 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:27.224 05:25:20 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:27.224 05:25:20 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:27.224 05:25:20 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.224 05:25:20 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.224 05:25:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:27.224 ************************************ 00:05:27.224 START TEST nvme_mount 00:05:27.224 ************************************ 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:27.224 05:25:20 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:27.224 05:25:20 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:28.158 Creating new GPT entries in memory. 00:05:28.158 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:28.158 other utilities. 00:05:28.158 05:25:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:28.158 05:25:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.158 05:25:21 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:28.158 05:25:21 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.158 05:25:21 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:29.092 Creating new GPT entries in memory. 00:05:29.092 The operation has completed successfully. 
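The `sgdisk --new=1:2048:2099199` call above comes from the sector arithmetic in setup/common.sh@51-60: a 1 GiB partition size is converted from bytes to 512-byte sectors, the first partition starts at the conventional 2048-sector offset, and each subsequent partition starts one sector past the previous end. A sketch that prints the `--new` arguments instead of invoking `sgdisk` (the function name is illustrative, not from the scripts):

```shell
# Sketch of the partition sector arithmetic behind the sgdisk calls
# (setup/common.sh@51-60): 1 GiB per partition, 512-byte sectors.
emit_sgdisk_args() {
    local part_no=$1
    local size=1073741824                # 1 GiB per partition, in bytes
    local part part_start=0 part_end=0
    (( size /= 512 ))                    # bytes -> 512-byte sectors (2097152)
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        echo "--new=$part:$part_start:$part_end"
    done
}
```

For one partition this yields `--new=1:2048:2099199`, matching the nvme_mount run here; the two-partition dm_mount run later in the log adds `--new=2:2099200:4196351` by the same rule.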
00:05:29.092 05:25:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:29.092 05:25:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.092 05:25:22 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1485309 00:05:29.092 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.092 05:25:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:29.092 05:25:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.092 05:25:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:29.092 05:25:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:29.092 05:25:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.351 05:25:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.287 05:25:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.287 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.288 05:25:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:30.546 
05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:30.546 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:30.546 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:30.803 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:30.803 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:30.803 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:30.803 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.803 05:25:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:32.174 05:25:25 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.174 05:25:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.106 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.365 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.365 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:33.365 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:33.365 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:33.365 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.365 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.365 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.365 05:25:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.365 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.365 00:05:33.365 real 0m6.283s 00:05:33.365 user 0m1.419s 00:05:33.365 sys 0m2.337s 00:05:33.365 05:25:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.365 05:25:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:33.365 ************************************ 00:05:33.365 END TEST nvme_mount 00:05:33.365 ************************************ 00:05:33.365 05:25:26 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:33.365 05:25:26 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:05:33.365 05:25:26 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.365 05:25:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:33.365 ************************************ 00:05:33.365 START TEST dm_mount 00:05:33.365 ************************************ 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:33.365 05:25:27 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:34.737 Creating new GPT entries in memory. 00:05:34.737 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:34.737 other utilities. 00:05:34.737 05:25:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:34.737 05:25:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.737 05:25:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:34.737 05:25:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:34.737 05:25:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:35.682 Creating new GPT entries in memory. 00:05:35.682 The operation has completed successfully. 00:05:35.682 05:25:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:35.682 05:25:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.682 05:25:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:35.682 05:25:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:35.682 05:25:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:36.615 The operation has completed successfully. 
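The dm_mount steps that follow (setup/devices.sh@155-166) run `dmsetup create`, poll up to five times for the `/dev/mapper/nvme_dm_test` node to appear, then `readlink -f` it and keep only the `dm-N` basename for the holder checks. A sketch of that wait-and-resolve pattern; `MAPPER_DIR` and the 1-second sleep between retries are assumptions added for testability, since the log only shows the `{1..5}` loop and the `break`:

```shell
# Sketch of the bounded wait + name resolution after "dmsetup create"
# (setup/devices.sh@160-166). MAPPER_DIR is a test-only assumption;
# the real script checks /dev/mapper directly.
MAPPER_DIR="${MAPPER_DIR:-/dev/mapper}"

wait_for_mapper() {
    local name=$1 t dm
    for t in {1..5}; do
        [[ -e $MAPPER_DIR/$name ]] && break
        sleep 1                           # retry pacing (assumed)
    done
    [[ -e $MAPPER_DIR/$name ]] || return 1
    dm=$(readlink -f "$MAPPER_DIR/$name") # e.g. /dev/dm-0
    echo "${dm##*/}"                      # reduce to dm-0
}
```

The resulting `dm-0` is what the log then checks under `/sys/class/block/nvme0n1p1/holders/` and `/sys/class/block/nvme0n1p2/holders/` to confirm both partitions back the device-mapper target.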
00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1487697 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:36.615 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:36.616 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:36.616 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:36.616 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:36.616 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:36.616 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:36.616 05:25:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:36.616 05:25:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.616 05:25:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:37.548 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.806 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.806 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:37.806 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:37.806 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:37.806 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:37.806 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:37.806 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:37.806 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.807 05:25:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:38.740 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:38.999 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:38.999 00:05:38.999 real 0m5.609s 00:05:38.999 user 0m0.897s 00:05:38.999 sys 0m1.556s 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.999 05:25:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:38.999 ************************************ 00:05:38.999 END TEST dm_mount 00:05:38.999 ************************************ 00:05:38.999 05:25:32 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:38.999 05:25:32 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:38.999 05:25:32 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:38.999 05:25:32 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:38.999 05:25:32 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:38.999 05:25:32 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:38.999 05:25:32 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:39.258 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:39.258 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:39.258 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:39.258 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:39.258 05:25:32 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:39.258 05:25:32 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:39.258 05:25:32 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:05:39.258 05:25:32 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:39.258 05:25:32 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:39.258 05:25:32 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:39.258 05:25:32 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:39.258 00:05:39.258 real 0m13.742s 00:05:39.258 user 0m2.963s 00:05:39.258 sys 0m4.843s 00:05:39.258 05:25:32 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.258 05:25:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:39.258 ************************************ 00:05:39.258 END TEST devices 00:05:39.258 ************************************ 00:05:39.258 00:05:39.258 real 0m42.792s 00:05:39.258 user 0m12.233s 00:05:39.258 sys 0m18.612s 00:05:39.258 05:25:32 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.258 05:25:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:39.258 ************************************ 00:05:39.258 END TEST setup.sh 00:05:39.258 ************************************ 00:05:39.515 05:25:32 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:40.446 Hugepages 00:05:40.446 node hugesize free / total 00:05:40.446 node0 1048576kB 0 / 0 00:05:40.446 node0 2048kB 2048 / 2048 00:05:40.446 node1 1048576kB 0 / 0 00:05:40.446 node1 2048kB 0 / 0 00:05:40.446 00:05:40.446 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:40.446 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:40.446 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:40.446 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:40.446 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:40.446 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:40.446 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:40.446 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:40.446 I/OAT 
0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:40.446 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:40.446 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:40.446 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:40.446 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:40.446 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:40.446 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:40.446 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:40.446 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:40.704 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:40.704 05:25:34 -- spdk/autotest.sh@130 -- # uname -s 00:05:40.704 05:25:34 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:40.704 05:25:34 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:40.704 05:25:34 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:42.078 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:42.078 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:42.078 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:42.078 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:42.078 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:42.078 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:42.078 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:42.078 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:42.078 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:42.078 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:42.078 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:42.078 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:42.078 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:42.078 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:42.078 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:42.078 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:43.010 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:43.010 05:25:36 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:43.974 05:25:37 -- 
common/autotest_common.sh@1533 -- # bdfs=() 00:05:43.974 05:25:37 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:43.974 05:25:37 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:43.974 05:25:37 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:43.974 05:25:37 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:43.974 05:25:37 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:43.974 05:25:37 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.974 05:25:37 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:43.974 05:25:37 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:43.974 05:25:37 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:43.974 05:25:37 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:43.974 05:25:37 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:45.350 Waiting for block devices as requested 00:05:45.350 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:45.350 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:45.350 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:45.608 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:45.608 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:45.608 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:45.608 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:45.867 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:45.867 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:45.867 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:45.867 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:46.125 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:46.125 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:46.125 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:46.125 0000:80:04.2 (8086 0e22): vfio-pci -> 
ioatdma 00:05:46.383 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:46.383 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:46.383 05:25:40 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:46.383 05:25:40 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:46.383 05:25:40 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:46.383 05:25:40 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:46.383 05:25:40 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:46.383 05:25:40 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:46.383 05:25:40 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:46.383 05:25:40 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:46.383 05:25:40 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:46.383 05:25:40 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:46.383 05:25:40 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:46.383 05:25:40 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:46.383 05:25:40 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:46.383 05:25:40 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:46.383 05:25:40 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:46.383 05:25:40 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:46.383 05:25:40 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:46.383 05:25:40 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:46.383 05:25:40 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:46.383 05:25:40 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:46.383 05:25:40 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:46.384 05:25:40 -- 
common/autotest_common.sh@1557 -- # continue 00:05:46.384 05:25:40 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:46.384 05:25:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.384 05:25:40 -- common/autotest_common.sh@10 -- # set +x 00:05:46.642 05:25:40 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:46.642 05:25:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.642 05:25:40 -- common/autotest_common.sh@10 -- # set +x 00:05:46.642 05:25:40 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:47.581 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:47.581 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:47.581 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:47.581 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:47.581 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:47.581 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:47.581 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:47.845 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:47.845 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:47.845 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:47.845 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:47.845 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:47.845 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:47.845 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:47.845 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:47.845 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:48.780 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:48.780 05:25:42 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:48.780 05:25:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.780 05:25:42 -- common/autotest_common.sh@10 -- # set +x 00:05:48.780 05:25:42 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:48.780 05:25:42 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:48.780 05:25:42 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:48.780 05:25:42 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:48.780 05:25:42 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:48.780 05:25:42 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:48.780 05:25:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:48.780 05:25:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:48.780 05:25:42 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.780 05:25:42 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:48.780 05:25:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:49.038 05:25:42 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:49.038 05:25:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:49.038 05:25:42 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:49.038 05:25:42 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:49.038 05:25:42 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:49.038 05:25:42 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:49.038 05:25:42 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:49.038 05:25:42 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:49.038 05:25:42 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:49.038 05:25:42 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1492873 00:05:49.038 05:25:42 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.038 05:25:42 -- common/autotest_common.sh@1598 -- # waitforlisten 1492873 00:05:49.038 05:25:42 -- common/autotest_common.sh@831 -- # '[' -z 1492873 ']' 00:05:49.038 05:25:42 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:49.038 05:25:42 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.038 05:25:42 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.038 05:25:42 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.038 05:25:42 -- common/autotest_common.sh@10 -- # set +x 00:05:49.038 [2024-07-25 05:25:42.570586] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:05:49.038 [2024-07-25 05:25:42.570690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492873 ] 00:05:49.038 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.038 [2024-07-25 05:25:42.633829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.038 [2024-07-25 05:25:42.723260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.297 05:25:42 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.297 05:25:42 -- common/autotest_common.sh@864 -- # return 0 00:05:49.297 05:25:42 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:49.297 05:25:42 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:49.297 05:25:42 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:52.576 nvme0n1 00:05:52.576 05:25:46 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:52.834 [2024-07-25 05:25:46.281220] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session 
with error 18 00:05:52.834 [2024-07-25 05:25:46.281275] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:52.834 request: 00:05:52.834 { 00:05:52.834 "nvme_ctrlr_name": "nvme0", 00:05:52.834 "password": "test", 00:05:52.834 "method": "bdev_nvme_opal_revert", 00:05:52.834 "req_id": 1 00:05:52.834 } 00:05:52.834 Got JSON-RPC error response 00:05:52.834 response: 00:05:52.834 { 00:05:52.834 "code": -32603, 00:05:52.834 "message": "Internal error" 00:05:52.835 } 00:05:52.835 05:25:46 -- common/autotest_common.sh@1604 -- # true 00:05:52.835 05:25:46 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:52.835 05:25:46 -- common/autotest_common.sh@1608 -- # killprocess 1492873 00:05:52.835 05:25:46 -- common/autotest_common.sh@950 -- # '[' -z 1492873 ']' 00:05:52.835 05:25:46 -- common/autotest_common.sh@954 -- # kill -0 1492873 00:05:52.835 05:25:46 -- common/autotest_common.sh@955 -- # uname 00:05:52.835 05:25:46 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.835 05:25:46 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1492873 00:05:52.835 05:25:46 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.835 05:25:46 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.835 05:25:46 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1492873' 00:05:52.835 killing process with pid 1492873 00:05:52.835 05:25:46 -- common/autotest_common.sh@969 -- # kill 1492873 00:05:52.835 05:25:46 -- common/autotest_common.sh@974 -- # wait 1492873 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.835 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.836 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:54.734 05:25:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:54.734 05:25:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:54.734 05:25:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:54.734 05:25:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:54.734 05:25:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:54.734 05:25:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.734 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:05:54.734 05:25:48 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:54.734 05:25:48 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:54.734 05:25:48 -- common/autotest_common.sh@1101 -- # '[' 2 
-le 1 ']' 00:05:54.734 05:25:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.734 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:05:54.734 ************************************ 00:05:54.734 START TEST env 00:05:54.734 ************************************ 00:05:54.734 05:25:48 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:54.734 * Looking for test storage... 00:05:54.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:54.734 05:25:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:54.734 05:25:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.734 05:25:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.734 05:25:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.734 ************************************ 00:05:54.734 START TEST env_memory 00:05:54.734 ************************************ 00:05:54.734 05:25:48 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:54.734 00:05:54.734 00:05:54.734 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.734 http://cunit.sourceforge.net/ 00:05:54.734 00:05:54.734 00:05:54.734 Suite: memory 00:05:54.734 Test: alloc and free memory map ...[2024-07-25 05:25:48.237877] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:54.734 passed 00:05:54.734 Test: mem map translation ...[2024-07-25 05:25:48.257718] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:54.734 [2024-07-25 05:25:48.257740] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:54.734 [2024-07-25 05:25:48.257791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:54.734 [2024-07-25 05:25:48.257807] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:54.734 passed 00:05:54.734 Test: mem map registration ...[2024-07-25 05:25:48.299136] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:54.734 [2024-07-25 05:25:48.299155] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:54.734 passed 00:05:54.734 Test: mem map adjacent registrations ...passed 00:05:54.734 00:05:54.734 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.734 suites 1 1 n/a 0 0 00:05:54.734 tests 4 4 4 0 0 00:05:54.734 asserts 152 152 152 0 n/a 00:05:54.734 00:05:54.734 Elapsed time = 0.142 seconds 00:05:54.734 00:05:54.734 real 0m0.148s 00:05:54.734 user 0m0.141s 00:05:54.734 sys 0m0.007s 00:05:54.734 05:25:48 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.734 05:25:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:54.734 ************************************ 00:05:54.734 END TEST env_memory 00:05:54.734 ************************************ 00:05:54.734 05:25:48 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:54.734 05:25:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:05:54.734 05:25:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.734 05:25:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.734 ************************************ 00:05:54.734 START TEST env_vtophys 00:05:54.734 ************************************ 00:05:54.734 05:25:48 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:54.734 EAL: lib.eal log level changed from notice to debug 00:05:54.734 EAL: Detected lcore 0 as core 0 on socket 0 00:05:54.734 EAL: Detected lcore 1 as core 1 on socket 0 00:05:54.734 EAL: Detected lcore 2 as core 2 on socket 0 00:05:54.734 EAL: Detected lcore 3 as core 3 on socket 0 00:05:54.734 EAL: Detected lcore 4 as core 4 on socket 0 00:05:54.734 EAL: Detected lcore 5 as core 5 on socket 0 00:05:54.734 EAL: Detected lcore 6 as core 8 on socket 0 00:05:54.734 EAL: Detected lcore 7 as core 9 on socket 0 00:05:54.734 EAL: Detected lcore 8 as core 10 on socket 0 00:05:54.734 EAL: Detected lcore 9 as core 11 on socket 0 00:05:54.734 EAL: Detected lcore 10 as core 12 on socket 0 00:05:54.734 EAL: Detected lcore 11 as core 13 on socket 0 00:05:54.734 EAL: Detected lcore 12 as core 0 on socket 1 00:05:54.735 EAL: Detected lcore 13 as core 1 on socket 1 00:05:54.735 EAL: Detected lcore 14 as core 2 on socket 1 00:05:54.735 EAL: Detected lcore 15 as core 3 on socket 1 00:05:54.735 EAL: Detected lcore 16 as core 4 on socket 1 00:05:54.735 EAL: Detected lcore 17 as core 5 on socket 1 00:05:54.735 EAL: Detected lcore 18 as core 8 on socket 1 00:05:54.735 EAL: Detected lcore 19 as core 9 on socket 1 00:05:54.735 EAL: Detected lcore 20 as core 10 on socket 1 00:05:54.735 EAL: Detected lcore 21 as core 11 on socket 1 00:05:54.735 EAL: Detected lcore 22 as core 12 on socket 1 00:05:54.735 EAL: Detected lcore 23 as core 13 on socket 1 00:05:54.735 EAL: Detected lcore 24 as core 0 on socket 0 00:05:54.735 EAL: Detected lcore 25 as core 1 on 
socket 0 00:05:54.735 EAL: Detected lcore 26 as core 2 on socket 0 00:05:54.735 EAL: Detected lcore 27 as core 3 on socket 0 00:05:54.735 EAL: Detected lcore 28 as core 4 on socket 0 00:05:54.735 EAL: Detected lcore 29 as core 5 on socket 0 00:05:54.735 EAL: Detected lcore 30 as core 8 on socket 0 00:05:54.735 EAL: Detected lcore 31 as core 9 on socket 0 00:05:54.735 EAL: Detected lcore 32 as core 10 on socket 0 00:05:54.735 EAL: Detected lcore 33 as core 11 on socket 0 00:05:54.735 EAL: Detected lcore 34 as core 12 on socket 0 00:05:54.735 EAL: Detected lcore 35 as core 13 on socket 0 00:05:54.735 EAL: Detected lcore 36 as core 0 on socket 1 00:05:54.735 EAL: Detected lcore 37 as core 1 on socket 1 00:05:54.735 EAL: Detected lcore 38 as core 2 on socket 1 00:05:54.735 EAL: Detected lcore 39 as core 3 on socket 1 00:05:54.735 EAL: Detected lcore 40 as core 4 on socket 1 00:05:54.735 EAL: Detected lcore 41 as core 5 on socket 1 00:05:54.735 EAL: Detected lcore 42 as core 8 on socket 1 00:05:54.735 EAL: Detected lcore 43 as core 9 on socket 1 00:05:54.735 EAL: Detected lcore 44 as core 10 on socket 1 00:05:54.735 EAL: Detected lcore 45 as core 11 on socket 1 00:05:54.735 EAL: Detected lcore 46 as core 12 on socket 1 00:05:54.735 EAL: Detected lcore 47 as core 13 on socket 1 00:05:54.735 EAL: Maximum logical cores by configuration: 128 00:05:54.735 EAL: Detected CPU lcores: 48 00:05:54.735 EAL: Detected NUMA nodes: 2 00:05:54.735 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:54.735 EAL: Detected shared linkage of DPDK 00:05:54.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:54.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:54.735 EAL: Registered [vdev] bus. 
00:05:54.735 EAL: bus.vdev log level changed from disabled to notice 00:05:54.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:54.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:54.735 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:54.735 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:54.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:54.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:54.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:54.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:54.735 EAL: No shared files mode enabled, IPC will be disabled 00:05:54.993 EAL: No shared files mode enabled, IPC is disabled 00:05:54.993 EAL: Bus pci wants IOVA as 'DC' 00:05:54.993 EAL: Bus vdev wants IOVA as 'DC' 00:05:54.993 EAL: Buses did not request a specific IOVA mode. 00:05:54.993 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:54.993 EAL: Selected IOVA mode 'VA' 00:05:54.993 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.993 EAL: Probing VFIO support... 00:05:54.993 EAL: IOMMU type 1 (Type 1) is supported 00:05:54.993 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:54.993 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:54.993 EAL: VFIO support initialized 00:05:54.993 EAL: Ask a virtual area of 0x2e000 bytes 00:05:54.993 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:54.993 EAL: Setting up physically contiguous memory... 
00:05:54.993 EAL: Setting maximum number of open files to 524288 00:05:54.993 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:54.993 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:54.993 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:54.993 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.993 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:54.994 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.994 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.994 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:54.994 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:54.994 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.994 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:54.994 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.994 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.994 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:54.994 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:54.994 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.994 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:54.994 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.994 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.994 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:54.994 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:54.994 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.994 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:54.994 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.994 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.994 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:54.994 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:54.994 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:54.994 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.994 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:54.994 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.994 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.994 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:54.994 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:54.994 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.994 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:54.994 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.994 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.994 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:54.994 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:54.994 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.994 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:54.994 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.994 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.994 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:54.994 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:54.994 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.994 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:54.994 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.994 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.994 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:54.994 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:54.994 EAL: Hugepages will be freed exactly as allocated. 
00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: TSC frequency is ~2700000 KHz 00:05:54.994 EAL: Main lcore 0 is ready (tid=7f77d6f96a00;cpuset=[0]) 00:05:54.994 EAL: Trying to obtain current memory policy. 00:05:54.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.994 EAL: Restoring previous memory policy: 0 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was expanded by 2MB 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:54.994 EAL: Mem event callback 'spdk:(nil)' registered 00:05:54.994 00:05:54.994 00:05:54.994 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.994 http://cunit.sourceforge.net/ 00:05:54.994 00:05:54.994 00:05:54.994 Suite: components_suite 00:05:54.994 Test: vtophys_malloc_test ...passed 00:05:54.994 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:54.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.994 EAL: Restoring previous memory policy: 4 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was expanded by 4MB 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was shrunk by 4MB 00:05:54.994 EAL: Trying to obtain current memory policy. 
00:05:54.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.994 EAL: Restoring previous memory policy: 4 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was expanded by 6MB 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was shrunk by 6MB 00:05:54.994 EAL: Trying to obtain current memory policy. 00:05:54.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.994 EAL: Restoring previous memory policy: 4 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was expanded by 10MB 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was shrunk by 10MB 00:05:54.994 EAL: Trying to obtain current memory policy. 00:05:54.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.994 EAL: Restoring previous memory policy: 4 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was expanded by 18MB 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was shrunk by 18MB 00:05:54.994 EAL: Trying to obtain current memory policy. 
00:05:54.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.994 EAL: Restoring previous memory policy: 4 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was expanded by 34MB 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was shrunk by 34MB 00:05:54.994 EAL: Trying to obtain current memory policy. 00:05:54.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.994 EAL: Restoring previous memory policy: 4 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was expanded by 66MB 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was shrunk by 66MB 00:05:54.994 EAL: Trying to obtain current memory policy. 00:05:54.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.994 EAL: Restoring previous memory policy: 4 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was expanded by 130MB 00:05:54.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.994 EAL: request: mp_malloc_sync 00:05:54.994 EAL: No shared files mode enabled, IPC is disabled 00:05:54.994 EAL: Heap on socket 0 was shrunk by 130MB 00:05:54.994 EAL: Trying to obtain current memory policy. 
00:05:54.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.252 EAL: Restoring previous memory policy: 4 00:05:55.252 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.252 EAL: request: mp_malloc_sync 00:05:55.252 EAL: No shared files mode enabled, IPC is disabled 00:05:55.252 EAL: Heap on socket 0 was expanded by 258MB 00:05:55.252 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.252 EAL: request: mp_malloc_sync 00:05:55.252 EAL: No shared files mode enabled, IPC is disabled 00:05:55.252 EAL: Heap on socket 0 was shrunk by 258MB 00:05:55.252 EAL: Trying to obtain current memory policy. 00:05:55.252 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.509 EAL: Restoring previous memory policy: 4 00:05:55.509 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.509 EAL: request: mp_malloc_sync 00:05:55.509 EAL: No shared files mode enabled, IPC is disabled 00:05:55.509 EAL: Heap on socket 0 was expanded by 514MB 00:05:55.509 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.509 EAL: request: mp_malloc_sync 00:05:55.509 EAL: No shared files mode enabled, IPC is disabled 00:05:55.509 EAL: Heap on socket 0 was shrunk by 514MB 00:05:55.510 EAL: Trying to obtain current memory policy. 
00:05:55.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.766 EAL: Restoring previous memory policy: 4 00:05:55.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.767 EAL: request: mp_malloc_sync 00:05:55.767 EAL: No shared files mode enabled, IPC is disabled 00:05:55.767 EAL: Heap on socket 0 was expanded by 1026MB 00:05:56.023 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.281 EAL: request: mp_malloc_sync 00:05:56.281 EAL: No shared files mode enabled, IPC is disabled 00:05:56.281 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:56.281 passed 00:05:56.281 00:05:56.281 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.281 suites 1 1 n/a 0 0 00:05:56.281 tests 2 2 2 0 0 00:05:56.281 asserts 497 497 497 0 n/a 00:05:56.281 00:05:56.281 Elapsed time = 1.381 seconds 00:05:56.281 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.281 EAL: request: mp_malloc_sync 00:05:56.282 EAL: No shared files mode enabled, IPC is disabled 00:05:56.282 EAL: Heap on socket 0 was shrunk by 2MB 00:05:56.282 EAL: No shared files mode enabled, IPC is disabled 00:05:56.282 EAL: No shared files mode enabled, IPC is disabled 00:05:56.282 EAL: No shared files mode enabled, IPC is disabled 00:05:56.282 00:05:56.282 real 0m1.501s 00:05:56.282 user 0m0.860s 00:05:56.282 sys 0m0.603s 00:05:56.282 05:25:49 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.282 05:25:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:56.282 ************************************ 00:05:56.282 END TEST env_vtophys 00:05:56.282 ************************************ 00:05:56.282 05:25:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:56.282 05:25:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.282 05:25:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.282 05:25:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.282 
************************************ 00:05:56.282 START TEST env_pci 00:05:56.282 ************************************ 00:05:56.282 05:25:49 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:56.282 00:05:56.282 00:05:56.282 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.282 http://cunit.sourceforge.net/ 00:05:56.282 00:05:56.282 00:05:56.282 Suite: pci 00:05:56.282 Test: pci_hook ...[2024-07-25 05:25:49.957196] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1493783 has claimed it 00:05:56.282 EAL: Cannot find device (10000:00:01.0) 00:05:56.282 EAL: Failed to attach device on primary process 00:05:56.282 passed 00:05:56.282 00:05:56.282 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.282 suites 1 1 n/a 0 0 00:05:56.282 tests 1 1 1 0 0 00:05:56.282 asserts 25 25 25 0 n/a 00:05:56.282 00:05:56.282 Elapsed time = 0.021 seconds 00:05:56.282 00:05:56.282 real 0m0.033s 00:05:56.282 user 0m0.011s 00:05:56.282 sys 0m0.022s 00:05:56.282 05:25:49 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.282 05:25:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:56.282 ************************************ 00:05:56.282 END TEST env_pci 00:05:56.282 ************************************ 00:05:56.540 05:25:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:56.540 05:25:50 env -- env/env.sh@15 -- # uname 00:05:56.540 05:25:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:56.540 05:25:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:56.540 05:25:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.540 05:25:50 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:56.540 05:25:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.540 05:25:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.540 ************************************ 00:05:56.540 START TEST env_dpdk_post_init 00:05:56.540 ************************************ 00:05:56.540 05:25:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.540 EAL: Detected CPU lcores: 48 00:05:56.540 EAL: Detected NUMA nodes: 2 00:05:56.540 EAL: Detected shared linkage of DPDK 00:05:56.540 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.540 EAL: Selected IOVA mode 'VA' 00:05:56.540 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.540 EAL: VFIO support initialized 00:05:56.540 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.540 EAL: Using IOMMU type 1 (Type 1) 00:05:56.540 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:56.540 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:56.540 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:56.540 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:56.540 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:56.540 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:56.540 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:56.541 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:56.799 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:56.799 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:56.799 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 
0000:80:04.2 (socket 1) 00:05:56.799 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:56.799 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:56.799 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:56.799 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:56.799 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:57.731 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:01.012 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:01.012 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:01.012 Starting DPDK initialization... 00:06:01.012 Starting SPDK post initialization... 00:06:01.012 SPDK NVMe probe 00:06:01.012 Attaching to 0000:88:00.0 00:06:01.012 Attached to 0000:88:00.0 00:06:01.012 Cleaning up... 00:06:01.012 00:06:01.012 real 0m4.390s 00:06:01.012 user 0m3.234s 00:06:01.012 sys 0m0.217s 00:06:01.012 05:25:54 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.012 05:25:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.012 ************************************ 00:06:01.012 END TEST env_dpdk_post_init 00:06:01.012 ************************************ 00:06:01.012 05:25:54 env -- env/env.sh@26 -- # uname 00:06:01.012 05:25:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:01.012 05:25:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.012 05:25:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.012 05:25:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.012 05:25:54 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.012 ************************************ 00:06:01.012 START TEST env_mem_callbacks 00:06:01.012 
************************************ 00:06:01.012 05:25:54 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.012 EAL: Detected CPU lcores: 48 00:06:01.012 EAL: Detected NUMA nodes: 2 00:06:01.012 EAL: Detected shared linkage of DPDK 00:06:01.012 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:01.012 EAL: Selected IOVA mode 'VA' 00:06:01.012 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.012 EAL: VFIO support initialized 00:06:01.012 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:01.012 00:06:01.012 00:06:01.012 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.012 http://cunit.sourceforge.net/ 00:06:01.012 00:06:01.012 00:06:01.012 Suite: memory 00:06:01.012 Test: test ... 00:06:01.012 register 0x200000200000 2097152 00:06:01.012 malloc 3145728 00:06:01.012 register 0x200000400000 4194304 00:06:01.012 buf 0x200000500000 len 3145728 PASSED 00:06:01.012 malloc 64 00:06:01.012 buf 0x2000004fff40 len 64 PASSED 00:06:01.012 malloc 4194304 00:06:01.012 register 0x200000800000 6291456 00:06:01.012 buf 0x200000a00000 len 4194304 PASSED 00:06:01.012 free 0x200000500000 3145728 00:06:01.012 free 0x2000004fff40 64 00:06:01.012 unregister 0x200000400000 4194304 PASSED 00:06:01.012 free 0x200000a00000 4194304 00:06:01.012 unregister 0x200000800000 6291456 PASSED 00:06:01.012 malloc 8388608 00:06:01.012 register 0x200000400000 10485760 00:06:01.012 buf 0x200000600000 len 8388608 PASSED 00:06:01.012 free 0x200000600000 8388608 00:06:01.012 unregister 0x200000400000 10485760 PASSED 00:06:01.012 passed 00:06:01.012 00:06:01.012 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.012 suites 1 1 n/a 0 0 00:06:01.012 tests 1 1 1 0 0 00:06:01.012 asserts 15 15 15 0 n/a 00:06:01.012 00:06:01.012 Elapsed time = 0.005 seconds 00:06:01.012 00:06:01.012 real 0m0.048s 00:06:01.012 user 0m0.014s 00:06:01.012 sys 0m0.034s 
00:06:01.012 05:25:54 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.012 05:25:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:01.012 ************************************ 00:06:01.012 END TEST env_mem_callbacks 00:06:01.012 ************************************ 00:06:01.012 00:06:01.012 real 0m6.412s 00:06:01.012 user 0m4.379s 00:06:01.012 sys 0m1.074s 00:06:01.012 05:25:54 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.012 05:25:54 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.012 ************************************ 00:06:01.012 END TEST env 00:06:01.012 ************************************ 00:06:01.012 05:25:54 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:01.012 05:25:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.012 05:25:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.012 05:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:01.012 ************************************ 00:06:01.012 START TEST rpc 00:06:01.012 ************************************ 00:06:01.012 05:25:54 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:01.012 * Looking for test storage... 
00:06:01.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:01.012 05:25:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1494471 00:06:01.012 05:25:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:01.012 05:25:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.012 05:25:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1494471 00:06:01.012 05:25:54 rpc -- common/autotest_common.sh@831 -- # '[' -z 1494471 ']' 00:06:01.012 05:25:54 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.012 05:25:54 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.012 05:25:54 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.012 05:25:54 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.012 05:25:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.012 [2024-07-25 05:25:54.694828] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:06:01.012 [2024-07-25 05:25:54.694914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494471 ] 00:06:01.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.270 [2024-07-25 05:25:54.756074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.270 [2024-07-25 05:25:54.841893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:06:01.270 [2024-07-25 05:25:54.841959] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1494471' to capture a snapshot of events at runtime. 00:06:01.270 [2024-07-25 05:25:54.841974] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:01.270 [2024-07-25 05:25:54.841997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:01.270 [2024-07-25 05:25:54.842007] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1494471 for offline analysis/debug. 00:06:01.270 [2024-07-25 05:25:54.842041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.529 05:25:55 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.529 05:25:55 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.529 05:25:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:01.529 05:25:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:01.529 05:25:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:01.529 05:25:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:01.529 05:25:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.529 05:25:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.529 05:25:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 
************************************ 00:06:01.529 START TEST rpc_integrity 00:06:01.529 ************************************ 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:01.529 { 00:06:01.529 "name": "Malloc0", 00:06:01.529 "aliases": [ 00:06:01.529 "4925f51b-786d-4302-a21d-f432c64fedc3" 00:06:01.529 ], 00:06:01.529 "product_name": "Malloc disk", 00:06:01.529 "block_size": 512, 00:06:01.529 "num_blocks": 16384, 00:06:01.529 "uuid": "4925f51b-786d-4302-a21d-f432c64fedc3", 00:06:01.529 
"assigned_rate_limits": { 00:06:01.529 "rw_ios_per_sec": 0, 00:06:01.529 "rw_mbytes_per_sec": 0, 00:06:01.529 "r_mbytes_per_sec": 0, 00:06:01.529 "w_mbytes_per_sec": 0 00:06:01.529 }, 00:06:01.529 "claimed": false, 00:06:01.529 "zoned": false, 00:06:01.529 "supported_io_types": { 00:06:01.529 "read": true, 00:06:01.529 "write": true, 00:06:01.529 "unmap": true, 00:06:01.529 "flush": true, 00:06:01.529 "reset": true, 00:06:01.529 "nvme_admin": false, 00:06:01.529 "nvme_io": false, 00:06:01.529 "nvme_io_md": false, 00:06:01.529 "write_zeroes": true, 00:06:01.529 "zcopy": true, 00:06:01.529 "get_zone_info": false, 00:06:01.529 "zone_management": false, 00:06:01.529 "zone_append": false, 00:06:01.529 "compare": false, 00:06:01.529 "compare_and_write": false, 00:06:01.529 "abort": true, 00:06:01.529 "seek_hole": false, 00:06:01.529 "seek_data": false, 00:06:01.529 "copy": true, 00:06:01.529 "nvme_iov_md": false 00:06:01.529 }, 00:06:01.529 "memory_domains": [ 00:06:01.529 { 00:06:01.529 "dma_device_id": "system", 00:06:01.529 "dma_device_type": 1 00:06:01.529 }, 00:06:01.529 { 00:06:01.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.529 "dma_device_type": 2 00:06:01.529 } 00:06:01.529 ], 00:06:01.529 "driver_specific": {} 00:06:01.529 } 00:06:01.529 ]' 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 [2024-07-25 05:25:55.222891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:01.529 [2024-07-25 05:25:55.222937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.529 [2024-07-25 05:25:55.222963] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x808af0 00:06:01.529 [2024-07-25 05:25:55.222979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.529 [2024-07-25 05:25:55.224470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.529 [2024-07-25 05:25:55.224497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:01.529 Passthru0 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.529 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.529 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.787 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.787 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:01.787 { 00:06:01.787 "name": "Malloc0", 00:06:01.787 "aliases": [ 00:06:01.787 "4925f51b-786d-4302-a21d-f432c64fedc3" 00:06:01.787 ], 00:06:01.787 "product_name": "Malloc disk", 00:06:01.787 "block_size": 512, 00:06:01.787 "num_blocks": 16384, 00:06:01.787 "uuid": "4925f51b-786d-4302-a21d-f432c64fedc3", 00:06:01.787 "assigned_rate_limits": { 00:06:01.787 "rw_ios_per_sec": 0, 00:06:01.788 "rw_mbytes_per_sec": 0, 00:06:01.788 "r_mbytes_per_sec": 0, 00:06:01.788 "w_mbytes_per_sec": 0 00:06:01.788 }, 00:06:01.788 "claimed": true, 00:06:01.788 "claim_type": "exclusive_write", 00:06:01.788 "zoned": false, 00:06:01.788 "supported_io_types": { 00:06:01.788 "read": true, 00:06:01.788 "write": true, 00:06:01.788 "unmap": true, 00:06:01.788 "flush": true, 00:06:01.788 "reset": true, 00:06:01.788 "nvme_admin": false, 00:06:01.788 "nvme_io": false, 00:06:01.788 "nvme_io_md": false, 00:06:01.788 "write_zeroes": true, 00:06:01.788 "zcopy": true, 00:06:01.788 "get_zone_info": false, 00:06:01.788 
"zone_management": false, 00:06:01.788 "zone_append": false, 00:06:01.788 "compare": false, 00:06:01.788 "compare_and_write": false, 00:06:01.788 "abort": true, 00:06:01.788 "seek_hole": false, 00:06:01.788 "seek_data": false, 00:06:01.788 "copy": true, 00:06:01.788 "nvme_iov_md": false 00:06:01.788 }, 00:06:01.788 "memory_domains": [ 00:06:01.788 { 00:06:01.788 "dma_device_id": "system", 00:06:01.788 "dma_device_type": 1 00:06:01.788 }, 00:06:01.788 { 00:06:01.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.788 "dma_device_type": 2 00:06:01.788 } 00:06:01.788 ], 00:06:01.788 "driver_specific": {} 00:06:01.788 }, 00:06:01.788 { 00:06:01.788 "name": "Passthru0", 00:06:01.788 "aliases": [ 00:06:01.788 "24769f48-df95-570f-bb19-f176eff5d004" 00:06:01.788 ], 00:06:01.788 "product_name": "passthru", 00:06:01.788 "block_size": 512, 00:06:01.788 "num_blocks": 16384, 00:06:01.788 "uuid": "24769f48-df95-570f-bb19-f176eff5d004", 00:06:01.788 "assigned_rate_limits": { 00:06:01.788 "rw_ios_per_sec": 0, 00:06:01.788 "rw_mbytes_per_sec": 0, 00:06:01.788 "r_mbytes_per_sec": 0, 00:06:01.788 "w_mbytes_per_sec": 0 00:06:01.788 }, 00:06:01.788 "claimed": false, 00:06:01.788 "zoned": false, 00:06:01.788 "supported_io_types": { 00:06:01.788 "read": true, 00:06:01.788 "write": true, 00:06:01.788 "unmap": true, 00:06:01.788 "flush": true, 00:06:01.788 "reset": true, 00:06:01.788 "nvme_admin": false, 00:06:01.788 "nvme_io": false, 00:06:01.788 "nvme_io_md": false, 00:06:01.788 "write_zeroes": true, 00:06:01.788 "zcopy": true, 00:06:01.788 "get_zone_info": false, 00:06:01.788 "zone_management": false, 00:06:01.788 "zone_append": false, 00:06:01.788 "compare": false, 00:06:01.788 "compare_and_write": false, 00:06:01.788 "abort": true, 00:06:01.788 "seek_hole": false, 00:06:01.788 "seek_data": false, 00:06:01.788 "copy": true, 00:06:01.788 "nvme_iov_md": false 00:06:01.788 }, 00:06:01.788 "memory_domains": [ 00:06:01.788 { 00:06:01.788 "dma_device_id": "system", 00:06:01.788 
"dma_device_type": 1 00:06:01.788 }, 00:06:01.788 { 00:06:01.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.788 "dma_device_type": 2 00:06:01.788 } 00:06:01.788 ], 00:06:01.788 "driver_specific": { 00:06:01.788 "passthru": { 00:06:01.788 "name": "Passthru0", 00:06:01.788 "base_bdev_name": "Malloc0" 00:06:01.788 } 00:06:01.788 } 00:06:01.788 } 00:06:01.788 ]' 00:06:01.788 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:01.788 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:01.788 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.788 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.788 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.788 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:01.788 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:01.788 05:25:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:01.788 00:06:01.788 real 0m0.236s 00:06:01.788 user 0m0.155s 00:06:01.788 sys 0m0.021s 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:01.788 05:25:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 ************************************ 00:06:01.788 END TEST rpc_integrity 00:06:01.788 ************************************ 00:06:01.788 05:25:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:01.788 05:25:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.788 05:25:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.788 05:25:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 ************************************ 00:06:01.788 START TEST rpc_plugins 00:06:01.788 ************************************ 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:01.788 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.788 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:01.788 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.788 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:01.788 { 00:06:01.788 "name": "Malloc1", 00:06:01.788 "aliases": [ 00:06:01.788 "610b97b9-47e1-4af8-85b1-6c5b0eca9741" 00:06:01.788 ], 00:06:01.788 "product_name": "Malloc disk", 00:06:01.788 "block_size": 4096, 00:06:01.788 "num_blocks": 256, 00:06:01.788 "uuid": "610b97b9-47e1-4af8-85b1-6c5b0eca9741", 00:06:01.788 "assigned_rate_limits": { 00:06:01.788 
"rw_ios_per_sec": 0, 00:06:01.788 "rw_mbytes_per_sec": 0, 00:06:01.788 "r_mbytes_per_sec": 0, 00:06:01.788 "w_mbytes_per_sec": 0 00:06:01.788 }, 00:06:01.788 "claimed": false, 00:06:01.788 "zoned": false, 00:06:01.788 "supported_io_types": { 00:06:01.788 "read": true, 00:06:01.788 "write": true, 00:06:01.788 "unmap": true, 00:06:01.788 "flush": true, 00:06:01.788 "reset": true, 00:06:01.788 "nvme_admin": false, 00:06:01.788 "nvme_io": false, 00:06:01.788 "nvme_io_md": false, 00:06:01.788 "write_zeroes": true, 00:06:01.788 "zcopy": true, 00:06:01.788 "get_zone_info": false, 00:06:01.788 "zone_management": false, 00:06:01.788 "zone_append": false, 00:06:01.788 "compare": false, 00:06:01.788 "compare_and_write": false, 00:06:01.788 "abort": true, 00:06:01.788 "seek_hole": false, 00:06:01.788 "seek_data": false, 00:06:01.788 "copy": true, 00:06:01.788 "nvme_iov_md": false 00:06:01.788 }, 00:06:01.788 "memory_domains": [ 00:06:01.788 { 00:06:01.788 "dma_device_id": "system", 00:06:01.788 "dma_device_type": 1 00:06:01.788 }, 00:06:01.788 { 00:06:01.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.788 "dma_device_type": 2 00:06:01.788 } 00:06:01.788 ], 00:06:01.788 "driver_specific": {} 00:06:01.788 } 00:06:01.788 ]' 00:06:01.788 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:01.788 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:01.788 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.788 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- 
# set +x 00:06:01.788 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.788 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:01.788 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:02.046 05:25:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:02.046 00:06:02.046 real 0m0.116s 00:06:02.046 user 0m0.074s 00:06:02.046 sys 0m0.012s 00:06:02.046 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.046 05:25:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:02.046 ************************************ 00:06:02.046 END TEST rpc_plugins 00:06:02.046 ************************************ 00:06:02.046 05:25:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:02.046 05:25:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.046 05:25:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.046 05:25:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.046 ************************************ 00:06:02.046 START TEST rpc_trace_cmd_test 00:06:02.046 ************************************ 00:06:02.046 05:25:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:02.046 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:02.046 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:02.046 05:25:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.046 05:25:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.046 05:25:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.046 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:02.046 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1494471", 00:06:02.046 "tpoint_group_mask": "0x8", 00:06:02.046 "iscsi_conn": { 00:06:02.046 "mask": "0x2", 00:06:02.046 
"tpoint_mask": "0x0" 00:06:02.046 }, 00:06:02.046 "scsi": { 00:06:02.046 "mask": "0x4", 00:06:02.046 "tpoint_mask": "0x0" 00:06:02.046 }, 00:06:02.046 "bdev": { 00:06:02.046 "mask": "0x8", 00:06:02.046 "tpoint_mask": "0xffffffffffffffff" 00:06:02.046 }, 00:06:02.046 "nvmf_rdma": { 00:06:02.046 "mask": "0x10", 00:06:02.046 "tpoint_mask": "0x0" 00:06:02.046 }, 00:06:02.046 "nvmf_tcp": { 00:06:02.046 "mask": "0x20", 00:06:02.046 "tpoint_mask": "0x0" 00:06:02.046 }, 00:06:02.046 "ftl": { 00:06:02.046 "mask": "0x40", 00:06:02.046 "tpoint_mask": "0x0" 00:06:02.046 }, 00:06:02.046 "blobfs": { 00:06:02.046 "mask": "0x80", 00:06:02.046 "tpoint_mask": "0x0" 00:06:02.046 }, 00:06:02.046 "dsa": { 00:06:02.046 "mask": "0x200", 00:06:02.046 "tpoint_mask": "0x0" 00:06:02.046 }, 00:06:02.046 "thread": { 00:06:02.046 "mask": "0x400", 00:06:02.046 "tpoint_mask": "0x0" 00:06:02.046 }, 00:06:02.046 "nvme_pcie": { 00:06:02.046 "mask": "0x800", 00:06:02.046 "tpoint_mask": "0x0" 00:06:02.047 }, 00:06:02.047 "iaa": { 00:06:02.047 "mask": "0x1000", 00:06:02.047 "tpoint_mask": "0x0" 00:06:02.047 }, 00:06:02.047 "nvme_tcp": { 00:06:02.047 "mask": "0x2000", 00:06:02.047 "tpoint_mask": "0x0" 00:06:02.047 }, 00:06:02.047 "bdev_nvme": { 00:06:02.047 "mask": "0x4000", 00:06:02.047 "tpoint_mask": "0x0" 00:06:02.047 }, 00:06:02.047 "sock": { 00:06:02.047 "mask": "0x8000", 00:06:02.047 "tpoint_mask": "0x0" 00:06:02.047 } 00:06:02.047 }' 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test 
-- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:02.047 00:06:02.047 real 0m0.187s 00:06:02.047 user 0m0.165s 00:06:02.047 sys 0m0.015s 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.047 05:25:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.047 ************************************ 00:06:02.047 END TEST rpc_trace_cmd_test 00:06:02.047 ************************************ 00:06:02.305 05:25:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:02.305 05:25:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:02.305 05:25:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:02.305 05:25:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.305 05:25:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.305 05:25:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.305 ************************************ 00:06:02.305 START TEST rpc_daemon_integrity 00:06:02.305 ************************************ 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:02.305 05:25:55 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:02.305 { 00:06:02.305 "name": "Malloc2", 00:06:02.305 "aliases": [ 00:06:02.305 "4c11a344-388f-4dd5-b944-8fa684a4dff9" 00:06:02.305 ], 00:06:02.305 "product_name": "Malloc disk", 00:06:02.305 "block_size": 512, 00:06:02.305 "num_blocks": 16384, 00:06:02.305 "uuid": "4c11a344-388f-4dd5-b944-8fa684a4dff9", 00:06:02.305 "assigned_rate_limits": { 00:06:02.305 "rw_ios_per_sec": 0, 00:06:02.305 "rw_mbytes_per_sec": 0, 00:06:02.305 "r_mbytes_per_sec": 0, 00:06:02.305 "w_mbytes_per_sec": 0 00:06:02.305 }, 00:06:02.305 "claimed": false, 00:06:02.305 "zoned": false, 00:06:02.305 "supported_io_types": { 00:06:02.305 "read": true, 00:06:02.305 "write": true, 00:06:02.305 "unmap": true, 00:06:02.305 "flush": true, 00:06:02.305 "reset": true, 00:06:02.305 "nvme_admin": false, 00:06:02.305 "nvme_io": false, 00:06:02.305 "nvme_io_md": false, 00:06:02.305 "write_zeroes": true, 00:06:02.305 "zcopy": true, 00:06:02.305 "get_zone_info": false, 00:06:02.305 "zone_management": false, 00:06:02.305 
"zone_append": false, 00:06:02.305 "compare": false, 00:06:02.305 "compare_and_write": false, 00:06:02.305 "abort": true, 00:06:02.305 "seek_hole": false, 00:06:02.305 "seek_data": false, 00:06:02.305 "copy": true, 00:06:02.305 "nvme_iov_md": false 00:06:02.305 }, 00:06:02.305 "memory_domains": [ 00:06:02.305 { 00:06:02.305 "dma_device_id": "system", 00:06:02.305 "dma_device_type": 1 00:06:02.305 }, 00:06:02.305 { 00:06:02.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.305 "dma_device_type": 2 00:06:02.305 } 00:06:02.305 ], 00:06:02.305 "driver_specific": {} 00:06:02.305 } 00:06:02.305 ]' 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.305 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.306 [2024-07-25 05:25:55.892982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:02.306 [2024-07-25 05:25:55.893029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:02.306 [2024-07-25 05:25:55.893055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x658290 00:06:02.306 [2024-07-25 05:25:55.893071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:02.306 [2024-07-25 05:25:55.894435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:02.306 [2024-07-25 05:25:55.894462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:02.306 Passthru0 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # 
rpc_cmd bdev_get_bdevs 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:02.306 { 00:06:02.306 "name": "Malloc2", 00:06:02.306 "aliases": [ 00:06:02.306 "4c11a344-388f-4dd5-b944-8fa684a4dff9" 00:06:02.306 ], 00:06:02.306 "product_name": "Malloc disk", 00:06:02.306 "block_size": 512, 00:06:02.306 "num_blocks": 16384, 00:06:02.306 "uuid": "4c11a344-388f-4dd5-b944-8fa684a4dff9", 00:06:02.306 "assigned_rate_limits": { 00:06:02.306 "rw_ios_per_sec": 0, 00:06:02.306 "rw_mbytes_per_sec": 0, 00:06:02.306 "r_mbytes_per_sec": 0, 00:06:02.306 "w_mbytes_per_sec": 0 00:06:02.306 }, 00:06:02.306 "claimed": true, 00:06:02.306 "claim_type": "exclusive_write", 00:06:02.306 "zoned": false, 00:06:02.306 "supported_io_types": { 00:06:02.306 "read": true, 00:06:02.306 "write": true, 00:06:02.306 "unmap": true, 00:06:02.306 "flush": true, 00:06:02.306 "reset": true, 00:06:02.306 "nvme_admin": false, 00:06:02.306 "nvme_io": false, 00:06:02.306 "nvme_io_md": false, 00:06:02.306 "write_zeroes": true, 00:06:02.306 "zcopy": true, 00:06:02.306 "get_zone_info": false, 00:06:02.306 "zone_management": false, 00:06:02.306 "zone_append": false, 00:06:02.306 "compare": false, 00:06:02.306 "compare_and_write": false, 00:06:02.306 "abort": true, 00:06:02.306 "seek_hole": false, 00:06:02.306 "seek_data": false, 00:06:02.306 "copy": true, 00:06:02.306 "nvme_iov_md": false 00:06:02.306 }, 00:06:02.306 "memory_domains": [ 00:06:02.306 { 00:06:02.306 "dma_device_id": "system", 00:06:02.306 "dma_device_type": 1 00:06:02.306 }, 00:06:02.306 { 00:06:02.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.306 "dma_device_type": 2 00:06:02.306 } 00:06:02.306 ], 00:06:02.306 
"driver_specific": {} 00:06:02.306 }, 00:06:02.306 { 00:06:02.306 "name": "Passthru0", 00:06:02.306 "aliases": [ 00:06:02.306 "8a620f74-af7c-5d26-b3ba-46e8c76eb11d" 00:06:02.306 ], 00:06:02.306 "product_name": "passthru", 00:06:02.306 "block_size": 512, 00:06:02.306 "num_blocks": 16384, 00:06:02.306 "uuid": "8a620f74-af7c-5d26-b3ba-46e8c76eb11d", 00:06:02.306 "assigned_rate_limits": { 00:06:02.306 "rw_ios_per_sec": 0, 00:06:02.306 "rw_mbytes_per_sec": 0, 00:06:02.306 "r_mbytes_per_sec": 0, 00:06:02.306 "w_mbytes_per_sec": 0 00:06:02.306 }, 00:06:02.306 "claimed": false, 00:06:02.306 "zoned": false, 00:06:02.306 "supported_io_types": { 00:06:02.306 "read": true, 00:06:02.306 "write": true, 00:06:02.306 "unmap": true, 00:06:02.306 "flush": true, 00:06:02.306 "reset": true, 00:06:02.306 "nvme_admin": false, 00:06:02.306 "nvme_io": false, 00:06:02.306 "nvme_io_md": false, 00:06:02.306 "write_zeroes": true, 00:06:02.306 "zcopy": true, 00:06:02.306 "get_zone_info": false, 00:06:02.306 "zone_management": false, 00:06:02.306 "zone_append": false, 00:06:02.306 "compare": false, 00:06:02.306 "compare_and_write": false, 00:06:02.306 "abort": true, 00:06:02.306 "seek_hole": false, 00:06:02.306 "seek_data": false, 00:06:02.306 "copy": true, 00:06:02.306 "nvme_iov_md": false 00:06:02.306 }, 00:06:02.306 "memory_domains": [ 00:06:02.306 { 00:06:02.306 "dma_device_id": "system", 00:06:02.306 "dma_device_type": 1 00:06:02.306 }, 00:06:02.306 { 00:06:02.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.306 "dma_device_type": 2 00:06:02.306 } 00:06:02.306 ], 00:06:02.306 "driver_specific": { 00:06:02.306 "passthru": { 00:06:02.306 "name": "Passthru0", 00:06:02.306 "base_bdev_name": "Malloc2" 00:06:02.306 } 00:06:02.306 } 00:06:02.306 } 00:06:02.306 ]' 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # 
rpc_cmd bdev_passthru_delete Passthru0 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:02.306 05:25:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:02.565 05:25:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:02.565 00:06:02.565 real 0m0.225s 00:06:02.565 user 0m0.146s 00:06:02.565 sys 0m0.023s 00:06:02.565 05:25:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.565 05:25:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.565 ************************************ 00:06:02.565 END TEST rpc_daemon_integrity 00:06:02.565 ************************************ 00:06:02.565 05:25:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:02.565 05:25:56 rpc -- rpc/rpc.sh@84 -- # killprocess 1494471 00:06:02.565 05:25:56 rpc -- common/autotest_common.sh@950 -- # '[' -z 1494471 ']' 
00:06:02.565 05:25:56 rpc -- common/autotest_common.sh@954 -- # kill -0 1494471 00:06:02.565 05:25:56 rpc -- common/autotest_common.sh@955 -- # uname 00:06:02.565 05:25:56 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.565 05:25:56 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1494471 00:06:02.565 05:25:56 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.565 05:25:56 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.565 05:25:56 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1494471' 00:06:02.565 killing process with pid 1494471 00:06:02.565 05:25:56 rpc -- common/autotest_common.sh@969 -- # kill 1494471 00:06:02.565 05:25:56 rpc -- common/autotest_common.sh@974 -- # wait 1494471 00:06:02.824 00:06:02.824 real 0m1.872s 00:06:02.824 user 0m2.340s 00:06:02.824 sys 0m0.601s 00:06:02.824 05:25:56 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.824 05:25:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.824 ************************************ 00:06:02.824 END TEST rpc 00:06:02.824 ************************************ 00:06:02.824 05:25:56 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:02.825 05:25:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.825 05:25:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.825 05:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:02.825 ************************************ 00:06:02.825 START TEST skip_rpc 00:06:02.825 ************************************ 00:06:02.825 05:25:56 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:03.082 * Looking for test storage... 
00:06:03.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:03.083 05:25:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:03.083 05:25:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:03.083 05:25:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:03.083 05:25:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.083 05:25:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.083 05:25:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.083 ************************************ 00:06:03.083 START TEST skip_rpc 00:06:03.083 ************************************ 00:06:03.083 05:25:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:03.083 05:25:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1494872 00:06:03.083 05:25:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:03.083 05:25:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.083 05:25:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:03.083 [2024-07-25 05:25:56.637051] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:06:03.083 [2024-07-25 05:25:56.637111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494872 ] 00:06:03.083 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.083 [2024-07-25 05:25:56.696689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.340 [2024-07-25 05:25:56.788926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1494872 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1494872 ']' 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1494872 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1494872 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1494872' 00:06:08.602 killing process with pid 1494872 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1494872 00:06:08.602 05:26:01 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1494872 00:06:08.602 00:06:08.602 real 0m5.442s 00:06:08.602 user 0m5.129s 00:06:08.602 sys 0m0.314s 00:06:08.602 05:26:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.602 05:26:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.602 ************************************ 00:06:08.602 END TEST skip_rpc 00:06:08.602 ************************************ 00:06:08.602 05:26:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:08.602 05:26:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.602 05:26:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.602 05:26:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.602 
************************************ 00:06:08.602 START TEST skip_rpc_with_json 00:06:08.602 ************************************ 00:06:08.602 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:08.602 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:08.602 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1495565 00:06:08.602 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.602 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.602 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1495565 00:06:08.602 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1495565 ']' 00:06:08.602 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.602 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.602 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.603 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.603 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.603 [2024-07-25 05:26:02.133897] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:06:08.603 [2024-07-25 05:26:02.133983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495565 ] 00:06:08.603 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.603 [2024-07-25 05:26:02.190505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.603 [2024-07-25 05:26:02.278396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.861 [2024-07-25 05:26:02.532596] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:08.861 request: 00:06:08.861 { 00:06:08.861 "trtype": "tcp", 00:06:08.861 "method": "nvmf_get_transports", 00:06:08.861 "req_id": 1 00:06:08.861 } 00:06:08.861 Got JSON-RPC error response 00:06:08.861 response: 00:06:08.861 { 00:06:08.861 "code": -19, 00:06:08.861 "message": "No such device" 00:06:08.861 } 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.861 [2024-07-25 05:26:02.540728] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.861 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.120 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.120 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:09.120 { 00:06:09.120 "subsystems": [ 00:06:09.120 { 00:06:09.120 "subsystem": "vfio_user_target", 00:06:09.120 "config": null 00:06:09.120 }, 00:06:09.120 { 00:06:09.120 "subsystem": "keyring", 00:06:09.120 "config": [] 00:06:09.120 }, 00:06:09.120 { 00:06:09.120 "subsystem": "iobuf", 00:06:09.120 "config": [ 00:06:09.120 { 00:06:09.120 "method": "iobuf_set_options", 00:06:09.120 "params": { 00:06:09.120 "small_pool_count": 8192, 00:06:09.120 "large_pool_count": 1024, 00:06:09.120 "small_bufsize": 8192, 00:06:09.120 "large_bufsize": 135168 00:06:09.120 } 00:06:09.120 } 00:06:09.120 ] 00:06:09.120 }, 00:06:09.120 { 00:06:09.120 "subsystem": "sock", 00:06:09.120 "config": [ 00:06:09.120 { 00:06:09.120 "method": "sock_set_default_impl", 00:06:09.120 "params": { 00:06:09.120 "impl_name": "posix" 00:06:09.120 } 00:06:09.120 }, 00:06:09.120 { 00:06:09.120 "method": "sock_impl_set_options", 00:06:09.120 "params": { 00:06:09.120 "impl_name": "ssl", 00:06:09.120 "recv_buf_size": 4096, 00:06:09.120 "send_buf_size": 4096, 00:06:09.120 "enable_recv_pipe": true, 00:06:09.120 "enable_quickack": false, 00:06:09.120 "enable_placement_id": 0, 00:06:09.120 "enable_zerocopy_send_server": true, 00:06:09.120 "enable_zerocopy_send_client": false, 00:06:09.120 "zerocopy_threshold": 0, 
00:06:09.120 "tls_version": 0, 00:06:09.120 "enable_ktls": false 00:06:09.120 } 00:06:09.120 }, 00:06:09.120 { 00:06:09.120 "method": "sock_impl_set_options", 00:06:09.120 "params": { 00:06:09.120 "impl_name": "posix", 00:06:09.120 "recv_buf_size": 2097152, 00:06:09.120 "send_buf_size": 2097152, 00:06:09.120 "enable_recv_pipe": true, 00:06:09.120 "enable_quickack": false, 00:06:09.120 "enable_placement_id": 0, 00:06:09.120 "enable_zerocopy_send_server": true, 00:06:09.120 "enable_zerocopy_send_client": false, 00:06:09.120 "zerocopy_threshold": 0, 00:06:09.120 "tls_version": 0, 00:06:09.120 "enable_ktls": false 00:06:09.120 } 00:06:09.120 } 00:06:09.120 ] 00:06:09.120 }, 00:06:09.120 { 00:06:09.120 "subsystem": "vmd", 00:06:09.120 "config": [] 00:06:09.120 }, 00:06:09.120 { 00:06:09.120 "subsystem": "accel", 00:06:09.120 "config": [ 00:06:09.120 { 00:06:09.120 "method": "accel_set_options", 00:06:09.120 "params": { 00:06:09.120 "small_cache_size": 128, 00:06:09.120 "large_cache_size": 16, 00:06:09.120 "task_count": 2048, 00:06:09.120 "sequence_count": 2048, 00:06:09.120 "buf_count": 2048 00:06:09.120 } 00:06:09.120 } 00:06:09.120 ] 00:06:09.120 }, 00:06:09.120 { 00:06:09.120 "subsystem": "bdev", 00:06:09.120 "config": [ 00:06:09.120 { 00:06:09.120 "method": "bdev_set_options", 00:06:09.120 "params": { 00:06:09.120 "bdev_io_pool_size": 65535, 00:06:09.120 "bdev_io_cache_size": 256, 00:06:09.120 "bdev_auto_examine": true, 00:06:09.120 "iobuf_small_cache_size": 128, 00:06:09.120 "iobuf_large_cache_size": 16 00:06:09.120 } 00:06:09.120 }, 00:06:09.120 { 00:06:09.120 "method": "bdev_raid_set_options", 00:06:09.120 "params": { 00:06:09.120 "process_window_size_kb": 1024, 00:06:09.120 "process_max_bandwidth_mb_sec": 0 00:06:09.120 } 00:06:09.120 }, 00:06:09.120 { 00:06:09.121 "method": "bdev_iscsi_set_options", 00:06:09.121 "params": { 00:06:09.121 "timeout_sec": 30 00:06:09.121 } 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "method": "bdev_nvme_set_options", 00:06:09.121 
"params": { 00:06:09.121 "action_on_timeout": "none", 00:06:09.121 "timeout_us": 0, 00:06:09.121 "timeout_admin_us": 0, 00:06:09.121 "keep_alive_timeout_ms": 10000, 00:06:09.121 "arbitration_burst": 0, 00:06:09.121 "low_priority_weight": 0, 00:06:09.121 "medium_priority_weight": 0, 00:06:09.121 "high_priority_weight": 0, 00:06:09.121 "nvme_adminq_poll_period_us": 10000, 00:06:09.121 "nvme_ioq_poll_period_us": 0, 00:06:09.121 "io_queue_requests": 0, 00:06:09.121 "delay_cmd_submit": true, 00:06:09.121 "transport_retry_count": 4, 00:06:09.121 "bdev_retry_count": 3, 00:06:09.121 "transport_ack_timeout": 0, 00:06:09.121 "ctrlr_loss_timeout_sec": 0, 00:06:09.121 "reconnect_delay_sec": 0, 00:06:09.121 "fast_io_fail_timeout_sec": 0, 00:06:09.121 "disable_auto_failback": false, 00:06:09.121 "generate_uuids": false, 00:06:09.121 "transport_tos": 0, 00:06:09.121 "nvme_error_stat": false, 00:06:09.121 "rdma_srq_size": 0, 00:06:09.121 "io_path_stat": false, 00:06:09.121 "allow_accel_sequence": false, 00:06:09.121 "rdma_max_cq_size": 0, 00:06:09.121 "rdma_cm_event_timeout_ms": 0, 00:06:09.121 "dhchap_digests": [ 00:06:09.121 "sha256", 00:06:09.121 "sha384", 00:06:09.121 "sha512" 00:06:09.121 ], 00:06:09.121 "dhchap_dhgroups": [ 00:06:09.121 "null", 00:06:09.121 "ffdhe2048", 00:06:09.121 "ffdhe3072", 00:06:09.121 "ffdhe4096", 00:06:09.121 "ffdhe6144", 00:06:09.121 "ffdhe8192" 00:06:09.121 ] 00:06:09.121 } 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "method": "bdev_nvme_set_hotplug", 00:06:09.121 "params": { 00:06:09.121 "period_us": 100000, 00:06:09.121 "enable": false 00:06:09.121 } 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "method": "bdev_wait_for_examine" 00:06:09.121 } 00:06:09.121 ] 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "subsystem": "scsi", 00:06:09.121 "config": null 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "subsystem": "scheduler", 00:06:09.121 "config": [ 00:06:09.121 { 00:06:09.121 "method": "framework_set_scheduler", 00:06:09.121 "params": { 00:06:09.121 
"name": "static" 00:06:09.121 } 00:06:09.121 } 00:06:09.121 ] 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "subsystem": "vhost_scsi", 00:06:09.121 "config": [] 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "subsystem": "vhost_blk", 00:06:09.121 "config": [] 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "subsystem": "ublk", 00:06:09.121 "config": [] 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "subsystem": "nbd", 00:06:09.121 "config": [] 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "subsystem": "nvmf", 00:06:09.121 "config": [ 00:06:09.121 { 00:06:09.121 "method": "nvmf_set_config", 00:06:09.121 "params": { 00:06:09.121 "discovery_filter": "match_any", 00:06:09.121 "admin_cmd_passthru": { 00:06:09.121 "identify_ctrlr": false 00:06:09.121 } 00:06:09.121 } 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "method": "nvmf_set_max_subsystems", 00:06:09.121 "params": { 00:06:09.121 "max_subsystems": 1024 00:06:09.121 } 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "method": "nvmf_set_crdt", 00:06:09.121 "params": { 00:06:09.121 "crdt1": 0, 00:06:09.121 "crdt2": 0, 00:06:09.121 "crdt3": 0 00:06:09.121 } 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "method": "nvmf_create_transport", 00:06:09.121 "params": { 00:06:09.121 "trtype": "TCP", 00:06:09.121 "max_queue_depth": 128, 00:06:09.121 "max_io_qpairs_per_ctrlr": 127, 00:06:09.121 "in_capsule_data_size": 4096, 00:06:09.121 "max_io_size": 131072, 00:06:09.121 "io_unit_size": 131072, 00:06:09.121 "max_aq_depth": 128, 00:06:09.121 "num_shared_buffers": 511, 00:06:09.121 "buf_cache_size": 4294967295, 00:06:09.121 "dif_insert_or_strip": false, 00:06:09.121 "zcopy": false, 00:06:09.121 "c2h_success": true, 00:06:09.121 "sock_priority": 0, 00:06:09.121 "abort_timeout_sec": 1, 00:06:09.121 "ack_timeout": 0, 00:06:09.121 "data_wr_pool_size": 0 00:06:09.121 } 00:06:09.121 } 00:06:09.121 ] 00:06:09.121 }, 00:06:09.121 { 00:06:09.121 "subsystem": "iscsi", 00:06:09.121 "config": [ 00:06:09.121 { 00:06:09.121 "method": "iscsi_set_options", 00:06:09.121 
"params": { 00:06:09.121 "node_base": "iqn.2016-06.io.spdk", 00:06:09.121 "max_sessions": 128, 00:06:09.121 "max_connections_per_session": 2, 00:06:09.121 "max_queue_depth": 64, 00:06:09.121 "default_time2wait": 2, 00:06:09.121 "default_time2retain": 20, 00:06:09.121 "first_burst_length": 8192, 00:06:09.121 "immediate_data": true, 00:06:09.121 "allow_duplicated_isid": false, 00:06:09.121 "error_recovery_level": 0, 00:06:09.121 "nop_timeout": 60, 00:06:09.121 "nop_in_interval": 30, 00:06:09.121 "disable_chap": false, 00:06:09.121 "require_chap": false, 00:06:09.121 "mutual_chap": false, 00:06:09.121 "chap_group": 0, 00:06:09.121 "max_large_datain_per_connection": 64, 00:06:09.121 "max_r2t_per_connection": 4, 00:06:09.121 "pdu_pool_size": 36864, 00:06:09.121 "immediate_data_pool_size": 16384, 00:06:09.121 "data_out_pool_size": 2048 00:06:09.121 } 00:06:09.121 } 00:06:09.121 ] 00:06:09.121 } 00:06:09.121 ] 00:06:09.121 } 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1495565 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1495565 ']' 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1495565 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495565 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1495565' 00:06:09.121 killing process with pid 1495565 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1495565 00:06:09.121 05:26:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1495565 00:06:09.688 05:26:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1495705 00:06:09.688 05:26:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:09.688 05:26:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1495705 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1495705 ']' 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1495705 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495705 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495705' 00:06:14.950 killing process with pid 1495705 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1495705 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1495705 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:14.950 00:06:14.950 real 0m6.475s 00:06:14.950 user 0m6.053s 00:06:14.950 sys 0m0.704s 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.950 ************************************ 00:06:14.950 END TEST skip_rpc_with_json 00:06:14.950 ************************************ 00:06:14.950 05:26:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:14.950 05:26:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.950 05:26:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.950 05:26:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.950 ************************************ 00:06:14.950 START TEST skip_rpc_with_delay 00:06:14.950 ************************************ 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.950 
05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:14.950 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.208 [2024-07-25 05:26:08.654154] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:15.209 [2024-07-25 05:26:08.654300] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:15.209 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:15.209 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.209 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.209 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.209 00:06:15.209 real 0m0.068s 00:06:15.209 user 0m0.039s 00:06:15.209 sys 0m0.028s 00:06:15.209 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.209 05:26:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:15.209 ************************************ 00:06:15.209 END TEST skip_rpc_with_delay 00:06:15.209 ************************************ 00:06:15.209 05:26:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:15.209 05:26:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:15.209 05:26:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:15.209 05:26:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.209 05:26:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.209 05:26:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.209 ************************************ 00:06:15.209 START TEST exit_on_failed_rpc_init 00:06:15.209 ************************************ 00:06:15.209 05:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:15.209 05:26:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1496417 00:06:15.209 05:26:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.209 05:26:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1496417 00:06:15.209 05:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1496417 ']' 00:06:15.209 05:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.209 05:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.209 05:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.209 05:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.209 05:26:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:15.209 [2024-07-25 05:26:08.770926] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:06:15.209 [2024-07-25 05:26:08.771011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496417 ] 00:06:15.209 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.209 [2024-07-25 05:26:08.827359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.467 [2024-07-25 05:26:08.917145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.467 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.467 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:15.467 05:26:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.467 05:26:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.726 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:15.726 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.726 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.726 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.726 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.726 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.726 05:26:09 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.726 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.726 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.726 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:15.726 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.726 [2024-07-25 05:26:09.225335] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:06:15.726 [2024-07-25 05:26:09.225420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496429 ] 00:06:15.726 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.726 [2024-07-25 05:26:09.286587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.726 [2024-07-25 05:26:09.379953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.726 [2024-07-25 05:26:09.380084] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:15.726 [2024-07-25 05:26:09.380106] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:15.726 [2024-07-25 05:26:09.380121] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1496417 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1496417 ']' 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1496417 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1496417 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1496417' 
00:06:15.984 killing process with pid 1496417 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1496417 00:06:15.984 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1496417 00:06:16.242 00:06:16.242 real 0m1.198s 00:06:16.242 user 0m1.298s 00:06:16.242 sys 0m0.462s 00:06:16.242 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.242 05:26:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.242 ************************************ 00:06:16.242 END TEST exit_on_failed_rpc_init 00:06:16.242 ************************************ 00:06:16.242 05:26:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.242 00:06:16.242 real 0m13.429s 00:06:16.242 user 0m12.625s 00:06:16.242 sys 0m1.666s 00:06:16.242 05:26:09 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.242 05:26:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.242 ************************************ 00:06:16.242 END TEST skip_rpc 00:06:16.242 ************************************ 00:06:16.501 05:26:09 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:16.501 05:26:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.501 05:26:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.501 05:26:09 -- common/autotest_common.sh@10 -- # set +x 00:06:16.501 ************************************ 00:06:16.501 START TEST rpc_client 00:06:16.501 ************************************ 00:06:16.501 05:26:09 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:16.501 * Looking for test storage... 
00:06:16.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:16.501 05:26:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:16.501 OK 00:06:16.501 05:26:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:16.501 00:06:16.501 real 0m0.069s 00:06:16.501 user 0m0.028s 00:06:16.501 sys 0m0.045s 00:06:16.501 05:26:10 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.501 05:26:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:16.501 ************************************ 00:06:16.501 END TEST rpc_client 00:06:16.501 ************************************ 00:06:16.501 05:26:10 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:16.501 05:26:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.501 05:26:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.501 05:26:10 -- common/autotest_common.sh@10 -- # set +x 00:06:16.501 ************************************ 00:06:16.501 START TEST json_config 00:06:16.501 ************************************ 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.501 05:26:10 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.501 05:26:10 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.501 05:26:10 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.501 05:26:10 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.501 05:26:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:16.501 05:26:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.501 05:26:10 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.501 05:26:10 json_config -- paths/export.sh@5 -- # export PATH 00:06:16.501 05:26:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@47 -- # : 0 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.501 05:26:10 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.501 05:26:10 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:16.501 05:26:10 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:16.501 INFO: JSON configuration test init 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.501 05:26:10 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:16.501 05:26:10 json_config -- json_config/common.sh@9 -- # local app=target 00:06:16.501 05:26:10 json_config -- json_config/common.sh@10 -- # shift 00:06:16.501 05:26:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.501 05:26:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.501 05:26:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.501 05:26:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.501 05:26:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.501 05:26:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1496673 00:06:16.501 05:26:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:16.501 05:26:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:06:16.501 Waiting for target to run... 00:06:16.501 05:26:10 json_config -- json_config/common.sh@25 -- # waitforlisten 1496673 /var/tmp/spdk_tgt.sock 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@831 -- # '[' -z 1496673 ']' 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.501 05:26:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.762 [2024-07-25 05:26:10.210494] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:06:16.762 [2024-07-25 05:26:10.210592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496673 ] 00:06:16.762 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.025 [2024-07-25 05:26:10.565391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.025 [2024-07-25 05:26:10.629004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.590 05:26:11 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.590 05:26:11 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:17.590 05:26:11 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.590 00:06:17.590 05:26:11 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:17.590 05:26:11 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:17.590 05:26:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.590 05:26:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.590 05:26:11 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:17.590 05:26:11 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:17.590 05:26:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.590 05:26:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.590 05:26:11 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:17.590 05:26:11 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:17.590 05:26:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:20.869 
05:26:14 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:20.869 05:26:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.869 05:26:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:20.869 05:26:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@51 -- # sort 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:20.869 05:26:14 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:20.869 05:26:14 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:20.869 05:26:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.126 05:26:14 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:21.126 05:26:14 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:21.126 05:26:14 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:21.126 05:26:14 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:21.126 05:26:14 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:21.126 05:26:14 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:21.127 05:26:14 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:21.127 05:26:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:21.127 05:26:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.127 05:26:14 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:21.127 05:26:14 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:21.127 05:26:14 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:21.127 05:26:14 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:21.127 05:26:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:21.127 MallocForNvmf0 00:06:21.384 05:26:14 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:21.384 05:26:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:21.384 MallocForNvmf1 00:06:21.642 05:26:15 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:21.642 05:26:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:21.642 [2024-07-25 05:26:15.318703] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.642 05:26:15 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:21.642 05:26:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:21.900 05:26:15 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:21.900 05:26:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:22.158 05:26:15 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:22.158 05:26:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:22.415 05:26:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:22.415 05:26:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:22.673 [2024-07-25 05:26:16.293918] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:22.673 05:26:16 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:22.673 05:26:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.673 05:26:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.673 05:26:16 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:22.673 05:26:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.673 05:26:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.673 05:26:16 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:22.673 05:26:16 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:22.673 05:26:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:22.932 MallocBdevForConfigChangeCheck 00:06:22.932 05:26:16 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:22.932 05:26:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.932 05:26:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.932 05:26:16 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:22.932 05:26:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:23.498 05:26:16 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:23.498 INFO: shutting down applications... 
00:06:23.498 05:26:16 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:23.498 05:26:16 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:23.498 05:26:16 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:23.498 05:26:16 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:25.396 Calling clear_iscsi_subsystem 00:06:25.396 Calling clear_nvmf_subsystem 00:06:25.396 Calling clear_nbd_subsystem 00:06:25.396 Calling clear_ublk_subsystem 00:06:25.396 Calling clear_vhost_blk_subsystem 00:06:25.396 Calling clear_vhost_scsi_subsystem 00:06:25.396 Calling clear_bdev_subsystem 00:06:25.396 05:26:18 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:25.396 05:26:18 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:25.396 05:26:18 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:25.396 05:26:18 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:25.396 05:26:18 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:25.396 05:26:18 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:25.396 05:26:19 json_config -- json_config/json_config.sh@349 -- # break 00:06:25.396 05:26:19 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:25.396 05:26:19 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:25.396 05:26:19 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:25.396 05:26:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:25.396 05:26:19 json_config -- json_config/common.sh@35 -- # [[ -n 1496673 ]] 00:06:25.396 05:26:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1496673 00:06:25.396 05:26:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:25.396 05:26:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.396 05:26:19 json_config -- json_config/common.sh@41 -- # kill -0 1496673 00:06:25.396 05:26:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.962 05:26:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.962 05:26:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.962 05:26:19 json_config -- json_config/common.sh@41 -- # kill -0 1496673 00:06:25.962 05:26:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:25.962 05:26:19 json_config -- json_config/common.sh@43 -- # break 00:06:25.962 05:26:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:25.962 05:26:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:25.962 SPDK target shutdown done 00:06:25.962 05:26:19 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:25.962 INFO: relaunching applications... 
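The shutdown sequence logged above (json_config/common.sh lines 38-43) follows a standard pattern: send a signal, then poll with `kill -0` (up to 30 half-second attempts) until the PID disappears. A minimal standalone sketch of that loop, with a background `sleep` standing in for the spdk_tgt process; note the original sends SIGINT, while this sketch uses SIGTERM so it behaves the same in non-interactive shells, where background jobs ignore SIGINT by default:

```shell
# Stand-in for the spdk_tgt process (hypothetical; not part of SPDK).
sleep 60 &
app_pid=$!

# Ask the target to shut down (the log uses kill -SIGINT on the real PID).
kill -TERM "$app_pid"

# Poll for process exit: kill -0 sends no signal, it only checks existence.
i=0
status=timeout
while [ "$i" -lt 30 ]; do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        status=done
        break
    fi
    i=$((i + 1))
    sleep 0.5
done
echo "target shutdown: $status"
```

The `kill -0` probe is why the loop needs no `ps` parsing: it asks the kernel directly whether the PID still exists.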
00:06:25.962 05:26:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.962 05:26:19 json_config -- json_config/common.sh@9 -- # local app=target 00:06:25.962 05:26:19 json_config -- json_config/common.sh@10 -- # shift 00:06:25.962 05:26:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.962 05:26:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.962 05:26:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.962 05:26:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.962 05:26:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.962 05:26:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1497986 00:06:25.962 05:26:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.962 Waiting for target to run... 00:06:25.962 05:26:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.962 05:26:19 json_config -- json_config/common.sh@25 -- # waitforlisten 1497986 /var/tmp/spdk_tgt.sock 00:06:25.962 05:26:19 json_config -- common/autotest_common.sh@831 -- # '[' -z 1497986 ']' 00:06:25.962 05:26:19 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.962 05:26:19 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.962 05:26:19 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:25.962 05:26:19 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.962 05:26:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.962 [2024-07-25 05:26:19.620671] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:06:25.962 [2024-07-25 05:26:19.620776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497986 ] 00:06:25.962 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.527 [2024-07-25 05:26:20.141106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.527 [2024-07-25 05:26:20.223564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.807 [2024-07-25 05:26:23.260163] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.807 [2024-07-25 05:26:23.292620] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:30.371 05:26:24 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.371 05:26:24 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:30.371 05:26:24 json_config -- json_config/common.sh@26 -- # echo '' 00:06:30.371 00:06:30.371 05:26:24 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:30.371 05:26:24 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:30.371 INFO: Checking if target configuration is the same... 
00:06:30.371 05:26:24 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.371 05:26:24 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:30.371 05:26:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:30.371 + '[' 2 -ne 2 ']' 00:06:30.371 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:30.371 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:30.371 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:30.371 +++ basename /dev/fd/62 00:06:30.371 ++ mktemp /tmp/62.XXX 00:06:30.371 + tmp_file_1=/tmp/62.QUV 00:06:30.371 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.371 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:30.371 + tmp_file_2=/tmp/spdk_tgt_config.json.TxD 00:06:30.371 + ret=0 00:06:30.371 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.937 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.937 + diff -u /tmp/62.QUV /tmp/spdk_tgt_config.json.TxD 00:06:30.937 + echo 'INFO: JSON config files are the same' 00:06:30.937 INFO: JSON config files are the same 00:06:30.937 + rm /tmp/62.QUV /tmp/spdk_tgt_config.json.TxD 00:06:30.937 + exit 0 00:06:30.937 05:26:24 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:30.937 05:26:24 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:30.937 INFO: changing configuration and checking if this can be detected... 
00:06:30.937 05:26:24 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.937 05:26:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:31.195 05:26:24 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:31.195 05:26:24 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:31.195 05:26:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:31.195 + '[' 2 -ne 2 ']' 00:06:31.195 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:31.195 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:31.195 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:31.195 +++ basename /dev/fd/62 00:06:31.195 ++ mktemp /tmp/62.XXX 00:06:31.195 + tmp_file_1=/tmp/62.Abu 00:06:31.195 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:31.195 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:31.195 + tmp_file_2=/tmp/spdk_tgt_config.json.PcU 00:06:31.195 + ret=0 00:06:31.195 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:31.453 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:31.453 + diff -u /tmp/62.Abu /tmp/spdk_tgt_config.json.PcU 00:06:31.453 + ret=1 00:06:31.453 + echo '=== Start of file: /tmp/62.Abu ===' 00:06:31.453 + cat /tmp/62.Abu 00:06:31.453 + echo '=== End of file: /tmp/62.Abu ===' 00:06:31.453 + echo '' 00:06:31.453 + echo '=== Start of file: /tmp/spdk_tgt_config.json.PcU ===' 00:06:31.453 + cat /tmp/spdk_tgt_config.json.PcU 00:06:31.453 + echo '=== End of file: /tmp/spdk_tgt_config.json.PcU ===' 00:06:31.453 + echo '' 00:06:31.453 + rm /tmp/62.Abu /tmp/spdk_tgt_config.json.PcU 00:06:31.453 + exit 1 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:31.453 INFO: configuration change detected. 
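The `json_diff.sh` trace above shows the comparison technique: both configs are dumped through `config_filter.py -method sort` into temp files and compared with `diff -u`, so key order and whitespace do not register as configuration changes. A simplified, self-contained sketch of the same idea, using a `python3` one-liner as a stand-in for `config_filter.py` (the real filter script does more, e.g. stripping volatile fields):

```shell
tmpdir=$(mktemp -d)

# Two logically identical configs with different key order.
cat > "$tmpdir/running.json" <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": []}]}
EOF
cat > "$tmpdir/saved.json" <<'EOF'
{"subsystems": [{"config": [], "subsystem": "bdev"}]}
EOF

# Normalize: deterministic serialization with sorted keys.
sort_json() {
    python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))' < "$1"
}

sort_json "$tmpdir/running.json" > "$tmpdir/a"
sort_json "$tmpdir/saved.json"   > "$tmpdir/b"

if diff -u "$tmpdir/a" "$tmpdir/b" > /dev/null; then
    result=same
    echo 'INFO: JSON config files are the same'
else
    result=changed
    echo 'INFO: configuration change detected.'
fi
rm -rf "$tmpdir"
```

Deleting a bdev afterwards (as the log does with `bdev_malloc_delete MallocBdevForConfigChangeCheck`) makes the second `save_config` dump differ, which is exactly the `ret=1` / "configuration change detected" branch seen above.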
00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:31.453 05:26:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.453 05:26:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@321 -- # [[ -n 1497986 ]] 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:31.453 05:26:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.453 05:26:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:31.453 05:26:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.453 05:26:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.453 05:26:25 json_config -- json_config/json_config.sh@327 -- # killprocess 1497986 00:06:31.453 05:26:25 json_config -- common/autotest_common.sh@950 -- # '[' -z 1497986 ']' 00:06:31.453 05:26:25 json_config -- common/autotest_common.sh@954 -- # kill -0 
1497986 00:06:31.453 05:26:25 json_config -- common/autotest_common.sh@955 -- # uname 00:06:31.711 05:26:25 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.711 05:26:25 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1497986 00:06:31.711 05:26:25 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.711 05:26:25 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.711 05:26:25 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1497986' 00:06:31.711 killing process with pid 1497986 00:06:31.711 05:26:25 json_config -- common/autotest_common.sh@969 -- # kill 1497986 00:06:31.711 05:26:25 json_config -- common/autotest_common.sh@974 -- # wait 1497986 00:06:33.085 05:26:26 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.085 05:26:26 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:33.085 05:26:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:33.085 05:26:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.085 05:26:26 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:33.085 05:26:26 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:33.085 INFO: Success 00:06:33.085 00:06:33.085 real 0m16.683s 00:06:33.085 user 0m18.539s 00:06:33.085 sys 0m2.049s 00:06:33.085 05:26:26 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.345 05:26:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.345 ************************************ 00:06:33.345 END TEST json_config 00:06:33.345 ************************************ 00:06:33.345 05:26:26 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:33.345 05:26:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.345 05:26:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.345 05:26:26 -- common/autotest_common.sh@10 -- # set +x 00:06:33.345 ************************************ 00:06:33.345 START TEST json_config_extra_key 00:06:33.345 ************************************ 00:06:33.345 05:26:26 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:33.345 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:33.345 05:26:26 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.345 05:26:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.345 05:26:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.345 05:26:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.345 05:26:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.345 05:26:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.345 05:26:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.345 05:26:26 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.345 05:26:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:33.345 05:26:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.346 05:26:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:33.346 05:26:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.346 05:26:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.346 05:26:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.346 05:26:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.346 05:26:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.346 05:26:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.346 05:26:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.346 05:26:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:33.346 05:26:26 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:33.346 INFO: launching applications... 
00:06:33.346 05:26:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1498907 00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:33.346 Waiting for target to run... 
00:06:33.346 05:26:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1498907 /var/tmp/spdk_tgt.sock 00:06:33.346 05:26:26 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1498907 ']' 00:06:33.346 05:26:26 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:33.346 05:26:26 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.346 05:26:26 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:33.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:33.346 05:26:26 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.346 05:26:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:33.346 [2024-07-25 05:26:26.939156] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:06:33.346 [2024-07-25 05:26:26.939272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498907 ] 00:06:33.346 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.604 [2024-07-25 05:26:27.286470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.862 [2024-07-25 05:26:27.350477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.429 05:26:27 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.429 05:26:27 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:34.429 05:26:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:34.429 00:06:34.429 05:26:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:34.429 INFO: shutting down applications... 
00:06:34.429 05:26:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:34.429 05:26:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:34.429 05:26:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:34.429 05:26:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1498907 ]] 00:06:34.429 05:26:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1498907 00:06:34.429 05:26:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:34.429 05:26:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.429 05:26:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1498907 00:06:34.429 05:26:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.996 05:26:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.996 05:26:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.996 05:26:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1498907 00:06:34.996 05:26:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:34.996 05:26:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:34.996 05:26:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:34.996 05:26:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:34.996 SPDK target shutdown done 00:06:34.996 05:26:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:34.996 Success 00:06:34.996 00:06:34.996 real 0m1.567s 00:06:34.996 user 0m1.533s 00:06:34.996 sys 0m0.443s 00:06:34.996 05:26:28 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.996 05:26:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:34.996 
************************************ 00:06:34.996 END TEST json_config_extra_key 00:06:34.996 ************************************ 00:06:34.996 05:26:28 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:34.996 05:26:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.996 05:26:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.996 05:26:28 -- common/autotest_common.sh@10 -- # set +x 00:06:34.996 ************************************ 00:06:34.996 START TEST alias_rpc 00:06:34.996 ************************************ 00:06:34.996 05:26:28 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:34.996 * Looking for test storage... 00:06:34.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:34.996 05:26:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:34.996 05:26:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1499119 00:06:34.996 05:26:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.996 05:26:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1499119 00:06:34.996 05:26:28 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1499119 ']' 00:06:34.996 05:26:28 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.996 05:26:28 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.996 05:26:28 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.996 05:26:28 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.996 05:26:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.996 [2024-07-25 05:26:28.544477] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:06:34.996 [2024-07-25 05:26:28.544620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499119 ] 00:06:34.996 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.996 [2024-07-25 05:26:28.601491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.996 [2024-07-25 05:26:28.685199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.254 05:26:28 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.254 05:26:28 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:35.254 05:26:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:35.819 05:26:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1499119 00:06:35.820 05:26:29 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1499119 ']' 00:06:35.820 05:26:29 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1499119 00:06:35.820 05:26:29 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:35.820 05:26:29 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.820 05:26:29 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1499119 00:06:35.820 05:26:29 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.820 05:26:29 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.820 05:26:29 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1499119' 
00:06:35.820 killing process with pid 1499119 00:06:35.820 05:26:29 alias_rpc -- common/autotest_common.sh@969 -- # kill 1499119 00:06:35.820 05:26:29 alias_rpc -- common/autotest_common.sh@974 -- # wait 1499119 00:06:36.077 00:06:36.077 real 0m1.238s 00:06:36.077 user 0m1.349s 00:06:36.077 sys 0m0.434s 00:06:36.077 05:26:29 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.077 05:26:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.077 ************************************ 00:06:36.077 END TEST alias_rpc 00:06:36.077 ************************************ 00:06:36.077 05:26:29 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:36.077 05:26:29 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:36.077 05:26:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.077 05:26:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.077 05:26:29 -- common/autotest_common.sh@10 -- # set +x 00:06:36.077 ************************************ 00:06:36.077 START TEST spdkcli_tcp 00:06:36.077 ************************************ 00:06:36.077 05:26:29 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:36.077 * Looking for test storage... 
00:06:36.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:36.335 05:26:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:36.335 05:26:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:36.335 05:26:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:36.335 05:26:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:36.335 05:26:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:36.335 05:26:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:36.335 05:26:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:36.335 05:26:29 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:36.335 05:26:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.335 05:26:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1499402 00:06:36.335 05:26:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:36.335 05:26:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1499402 00:06:36.335 05:26:29 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1499402 ']' 00:06:36.335 05:26:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.335 05:26:29 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.335 05:26:29 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.335 05:26:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.335 05:26:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.335 [2024-07-25 05:26:29.838848] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:06:36.335 [2024-07-25 05:26:29.838932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499402 ] 00:06:36.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.335 [2024-07-25 05:26:29.896057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.335 [2024-07-25 05:26:29.980778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.335 [2024-07-25 05:26:29.980782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.593 05:26:30 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.593 05:26:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:36.593 05:26:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1499414 00:06:36.593 05:26:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:36.593 05:26:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:36.851 [ 00:06:36.851 "bdev_malloc_delete", 00:06:36.851 "bdev_malloc_create", 00:06:36.851 "bdev_null_resize", 00:06:36.851 "bdev_null_delete", 00:06:36.851 "bdev_null_create", 00:06:36.851 "bdev_nvme_cuse_unregister", 00:06:36.851 "bdev_nvme_cuse_register", 00:06:36.851 "bdev_opal_new_user", 00:06:36.851 "bdev_opal_set_lock_state", 00:06:36.851 "bdev_opal_delete", 00:06:36.851 "bdev_opal_get_info", 00:06:36.851 "bdev_opal_create", 00:06:36.851 "bdev_nvme_opal_revert", 00:06:36.851 
"bdev_nvme_opal_init", 00:06:36.851 "bdev_nvme_send_cmd", 00:06:36.851 "bdev_nvme_get_path_iostat", 00:06:36.851 "bdev_nvme_get_mdns_discovery_info", 00:06:36.851 "bdev_nvme_stop_mdns_discovery", 00:06:36.851 "bdev_nvme_start_mdns_discovery", 00:06:36.851 "bdev_nvme_set_multipath_policy", 00:06:36.851 "bdev_nvme_set_preferred_path", 00:06:36.851 "bdev_nvme_get_io_paths", 00:06:36.851 "bdev_nvme_remove_error_injection", 00:06:36.851 "bdev_nvme_add_error_injection", 00:06:36.851 "bdev_nvme_get_discovery_info", 00:06:36.851 "bdev_nvme_stop_discovery", 00:06:36.851 "bdev_nvme_start_discovery", 00:06:36.851 "bdev_nvme_get_controller_health_info", 00:06:36.851 "bdev_nvme_disable_controller", 00:06:36.851 "bdev_nvme_enable_controller", 00:06:36.851 "bdev_nvme_reset_controller", 00:06:36.851 "bdev_nvme_get_transport_statistics", 00:06:36.851 "bdev_nvme_apply_firmware", 00:06:36.851 "bdev_nvme_detach_controller", 00:06:36.851 "bdev_nvme_get_controllers", 00:06:36.851 "bdev_nvme_attach_controller", 00:06:36.851 "bdev_nvme_set_hotplug", 00:06:36.851 "bdev_nvme_set_options", 00:06:36.851 "bdev_passthru_delete", 00:06:36.851 "bdev_passthru_create", 00:06:36.851 "bdev_lvol_set_parent_bdev", 00:06:36.851 "bdev_lvol_set_parent", 00:06:36.851 "bdev_lvol_check_shallow_copy", 00:06:36.851 "bdev_lvol_start_shallow_copy", 00:06:36.851 "bdev_lvol_grow_lvstore", 00:06:36.851 "bdev_lvol_get_lvols", 00:06:36.851 "bdev_lvol_get_lvstores", 00:06:36.851 "bdev_lvol_delete", 00:06:36.851 "bdev_lvol_set_read_only", 00:06:36.851 "bdev_lvol_resize", 00:06:36.851 "bdev_lvol_decouple_parent", 00:06:36.851 "bdev_lvol_inflate", 00:06:36.851 "bdev_lvol_rename", 00:06:36.851 "bdev_lvol_clone_bdev", 00:06:36.851 "bdev_lvol_clone", 00:06:36.851 "bdev_lvol_snapshot", 00:06:36.851 "bdev_lvol_create", 00:06:36.851 "bdev_lvol_delete_lvstore", 00:06:36.851 "bdev_lvol_rename_lvstore", 00:06:36.851 "bdev_lvol_create_lvstore", 00:06:36.851 "bdev_raid_set_options", 00:06:36.851 "bdev_raid_remove_base_bdev", 
00:06:36.851 "bdev_raid_add_base_bdev", 00:06:36.851 "bdev_raid_delete", 00:06:36.851 "bdev_raid_create", 00:06:36.852 "bdev_raid_get_bdevs", 00:06:36.852 "bdev_error_inject_error", 00:06:36.852 "bdev_error_delete", 00:06:36.852 "bdev_error_create", 00:06:36.852 "bdev_split_delete", 00:06:36.852 "bdev_split_create", 00:06:36.852 "bdev_delay_delete", 00:06:36.852 "bdev_delay_create", 00:06:36.852 "bdev_delay_update_latency", 00:06:36.852 "bdev_zone_block_delete", 00:06:36.852 "bdev_zone_block_create", 00:06:36.852 "blobfs_create", 00:06:36.852 "blobfs_detect", 00:06:36.852 "blobfs_set_cache_size", 00:06:36.852 "bdev_aio_delete", 00:06:36.852 "bdev_aio_rescan", 00:06:36.852 "bdev_aio_create", 00:06:36.852 "bdev_ftl_set_property", 00:06:36.852 "bdev_ftl_get_properties", 00:06:36.852 "bdev_ftl_get_stats", 00:06:36.852 "bdev_ftl_unmap", 00:06:36.852 "bdev_ftl_unload", 00:06:36.852 "bdev_ftl_delete", 00:06:36.852 "bdev_ftl_load", 00:06:36.852 "bdev_ftl_create", 00:06:36.852 "bdev_virtio_attach_controller", 00:06:36.852 "bdev_virtio_scsi_get_devices", 00:06:36.852 "bdev_virtio_detach_controller", 00:06:36.852 "bdev_virtio_blk_set_hotplug", 00:06:36.852 "bdev_iscsi_delete", 00:06:36.852 "bdev_iscsi_create", 00:06:36.852 "bdev_iscsi_set_options", 00:06:36.852 "accel_error_inject_error", 00:06:36.852 "ioat_scan_accel_module", 00:06:36.852 "dsa_scan_accel_module", 00:06:36.852 "iaa_scan_accel_module", 00:06:36.852 "vfu_virtio_create_scsi_endpoint", 00:06:36.852 "vfu_virtio_scsi_remove_target", 00:06:36.852 "vfu_virtio_scsi_add_target", 00:06:36.852 "vfu_virtio_create_blk_endpoint", 00:06:36.852 "vfu_virtio_delete_endpoint", 00:06:36.852 "keyring_file_remove_key", 00:06:36.852 "keyring_file_add_key", 00:06:36.852 "keyring_linux_set_options", 00:06:36.852 "iscsi_get_histogram", 00:06:36.852 "iscsi_enable_histogram", 00:06:36.852 "iscsi_set_options", 00:06:36.852 "iscsi_get_auth_groups", 00:06:36.852 "iscsi_auth_group_remove_secret", 00:06:36.852 "iscsi_auth_group_add_secret", 
00:06:36.852 "iscsi_delete_auth_group", 00:06:36.852 "iscsi_create_auth_group", 00:06:36.852 "iscsi_set_discovery_auth", 00:06:36.852 "iscsi_get_options", 00:06:36.852 "iscsi_target_node_request_logout", 00:06:36.852 "iscsi_target_node_set_redirect", 00:06:36.852 "iscsi_target_node_set_auth", 00:06:36.852 "iscsi_target_node_add_lun", 00:06:36.852 "iscsi_get_stats", 00:06:36.852 "iscsi_get_connections", 00:06:36.852 "iscsi_portal_group_set_auth", 00:06:36.852 "iscsi_start_portal_group", 00:06:36.852 "iscsi_delete_portal_group", 00:06:36.852 "iscsi_create_portal_group", 00:06:36.852 "iscsi_get_portal_groups", 00:06:36.852 "iscsi_delete_target_node", 00:06:36.852 "iscsi_target_node_remove_pg_ig_maps", 00:06:36.852 "iscsi_target_node_add_pg_ig_maps", 00:06:36.852 "iscsi_create_target_node", 00:06:36.852 "iscsi_get_target_nodes", 00:06:36.852 "iscsi_delete_initiator_group", 00:06:36.852 "iscsi_initiator_group_remove_initiators", 00:06:36.852 "iscsi_initiator_group_add_initiators", 00:06:36.852 "iscsi_create_initiator_group", 00:06:36.852 "iscsi_get_initiator_groups", 00:06:36.852 "nvmf_set_crdt", 00:06:36.852 "nvmf_set_config", 00:06:36.852 "nvmf_set_max_subsystems", 00:06:36.852 "nvmf_stop_mdns_prr", 00:06:36.852 "nvmf_publish_mdns_prr", 00:06:36.852 "nvmf_subsystem_get_listeners", 00:06:36.852 "nvmf_subsystem_get_qpairs", 00:06:36.852 "nvmf_subsystem_get_controllers", 00:06:36.852 "nvmf_get_stats", 00:06:36.852 "nvmf_get_transports", 00:06:36.852 "nvmf_create_transport", 00:06:36.852 "nvmf_get_targets", 00:06:36.852 "nvmf_delete_target", 00:06:36.852 "nvmf_create_target", 00:06:36.852 "nvmf_subsystem_allow_any_host", 00:06:36.852 "nvmf_subsystem_remove_host", 00:06:36.852 "nvmf_subsystem_add_host", 00:06:36.852 "nvmf_ns_remove_host", 00:06:36.852 "nvmf_ns_add_host", 00:06:36.852 "nvmf_subsystem_remove_ns", 00:06:36.852 "nvmf_subsystem_add_ns", 00:06:36.852 "nvmf_subsystem_listener_set_ana_state", 00:06:36.852 "nvmf_discovery_get_referrals", 00:06:36.852 
"nvmf_discovery_remove_referral", 00:06:36.852 "nvmf_discovery_add_referral", 00:06:36.852 "nvmf_subsystem_remove_listener", 00:06:36.852 "nvmf_subsystem_add_listener", 00:06:36.852 "nvmf_delete_subsystem", 00:06:36.852 "nvmf_create_subsystem", 00:06:36.852 "nvmf_get_subsystems", 00:06:36.852 "env_dpdk_get_mem_stats", 00:06:36.852 "nbd_get_disks", 00:06:36.852 "nbd_stop_disk", 00:06:36.852 "nbd_start_disk", 00:06:36.852 "ublk_recover_disk", 00:06:36.852 "ublk_get_disks", 00:06:36.852 "ublk_stop_disk", 00:06:36.852 "ublk_start_disk", 00:06:36.852 "ublk_destroy_target", 00:06:36.852 "ublk_create_target", 00:06:36.852 "virtio_blk_create_transport", 00:06:36.852 "virtio_blk_get_transports", 00:06:36.852 "vhost_controller_set_coalescing", 00:06:36.852 "vhost_get_controllers", 00:06:36.852 "vhost_delete_controller", 00:06:36.852 "vhost_create_blk_controller", 00:06:36.852 "vhost_scsi_controller_remove_target", 00:06:36.852 "vhost_scsi_controller_add_target", 00:06:36.852 "vhost_start_scsi_controller", 00:06:36.852 "vhost_create_scsi_controller", 00:06:36.852 "thread_set_cpumask", 00:06:36.852 "framework_get_governor", 00:06:36.852 "framework_get_scheduler", 00:06:36.852 "framework_set_scheduler", 00:06:36.852 "framework_get_reactors", 00:06:36.852 "thread_get_io_channels", 00:06:36.852 "thread_get_pollers", 00:06:36.852 "thread_get_stats", 00:06:36.852 "framework_monitor_context_switch", 00:06:36.852 "spdk_kill_instance", 00:06:36.852 "log_enable_timestamps", 00:06:36.852 "log_get_flags", 00:06:36.852 "log_clear_flag", 00:06:36.852 "log_set_flag", 00:06:36.852 "log_get_level", 00:06:36.852 "log_set_level", 00:06:36.852 "log_get_print_level", 00:06:36.852 "log_set_print_level", 00:06:36.852 "framework_enable_cpumask_locks", 00:06:36.852 "framework_disable_cpumask_locks", 00:06:36.852 "framework_wait_init", 00:06:36.852 "framework_start_init", 00:06:36.852 "scsi_get_devices", 00:06:36.852 "bdev_get_histogram", 00:06:36.852 "bdev_enable_histogram", 00:06:36.852 
"bdev_set_qos_limit", 00:06:36.852 "bdev_set_qd_sampling_period", 00:06:36.852 "bdev_get_bdevs", 00:06:36.852 "bdev_reset_iostat", 00:06:36.852 "bdev_get_iostat", 00:06:36.852 "bdev_examine", 00:06:36.852 "bdev_wait_for_examine", 00:06:36.852 "bdev_set_options", 00:06:36.852 "notify_get_notifications", 00:06:36.852 "notify_get_types", 00:06:36.852 "accel_get_stats", 00:06:36.852 "accel_set_options", 00:06:36.852 "accel_set_driver", 00:06:36.852 "accel_crypto_key_destroy", 00:06:36.852 "accel_crypto_keys_get", 00:06:36.852 "accel_crypto_key_create", 00:06:36.852 "accel_assign_opc", 00:06:36.852 "accel_get_module_info", 00:06:36.852 "accel_get_opc_assignments", 00:06:36.852 "vmd_rescan", 00:06:36.852 "vmd_remove_device", 00:06:36.852 "vmd_enable", 00:06:36.852 "sock_get_default_impl", 00:06:36.852 "sock_set_default_impl", 00:06:36.852 "sock_impl_set_options", 00:06:36.852 "sock_impl_get_options", 00:06:36.852 "iobuf_get_stats", 00:06:36.852 "iobuf_set_options", 00:06:36.852 "keyring_get_keys", 00:06:36.852 "framework_get_pci_devices", 00:06:36.852 "framework_get_config", 00:06:36.852 "framework_get_subsystems", 00:06:36.852 "vfu_tgt_set_base_path", 00:06:36.852 "trace_get_info", 00:06:36.852 "trace_get_tpoint_group_mask", 00:06:36.852 "trace_disable_tpoint_group", 00:06:36.852 "trace_enable_tpoint_group", 00:06:36.852 "trace_clear_tpoint_mask", 00:06:36.852 "trace_set_tpoint_mask", 00:06:36.852 "spdk_get_version", 00:06:36.852 "rpc_get_methods" 00:06:36.852 ] 00:06:36.852 05:26:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:36.852 05:26:30 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:36.852 05:26:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.853 05:26:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:36.853 05:26:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1499402 00:06:36.853 05:26:30 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1499402 ']' 
00:06:36.853 05:26:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1499402 00:06:36.853 05:26:30 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:36.853 05:26:30 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.853 05:26:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1499402 00:06:36.853 05:26:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.853 05:26:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.853 05:26:30 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1499402' 00:06:36.853 killing process with pid 1499402 00:06:36.853 05:26:30 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1499402 00:06:36.853 05:26:30 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1499402 00:06:37.418 00:06:37.418 real 0m1.212s 00:06:37.418 user 0m2.150s 00:06:37.418 sys 0m0.448s 00:06:37.418 05:26:30 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.418 05:26:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.418 ************************************ 00:06:37.418 END TEST spdkcli_tcp 00:06:37.418 ************************************ 00:06:37.418 05:26:30 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:37.418 05:26:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.418 05:26:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.418 05:26:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.418 ************************************ 00:06:37.418 START TEST dpdk_mem_utility 00:06:37.418 ************************************ 00:06:37.418 05:26:30 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:37.418 
* Looking for test storage... 00:06:37.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:37.418 05:26:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:37.418 05:26:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1499602 00:06:37.418 05:26:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.418 05:26:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1499602 00:06:37.418 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1499602 ']' 00:06:37.418 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.418 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.418 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.418 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.418 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.418 [2024-07-25 05:26:31.099047] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:06:37.418 [2024-07-25 05:26:31.099146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499602 ] 00:06:37.676 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.676 [2024-07-25 05:26:31.158687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.676 [2024-07-25 05:26:31.242071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.934 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.934 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:37.934 05:26:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:37.934 05:26:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:37.934 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.934 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.934 { 00:06:37.934 "filename": "/tmp/spdk_mem_dump.txt" 00:06:37.934 } 00:06:37.934 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.934 05:26:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:37.934 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:37.934 1 heaps totaling size 814.000000 MiB 00:06:37.934 size: 814.000000 MiB heap id: 0 00:06:37.934 end heaps---------- 00:06:37.934 8 mempools totaling size 598.116089 MiB 00:06:37.934 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:37.934 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:37.934 size: 84.521057 MiB name: bdev_io_1499602 00:06:37.934 size: 51.011292 MiB name: evtpool_1499602 
00:06:37.934 size: 50.003479 MiB name: msgpool_1499602 00:06:37.934 size: 21.763794 MiB name: PDU_Pool 00:06:37.934 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:37.934 size: 0.026123 MiB name: Session_Pool 00:06:37.934 end mempools------- 00:06:37.934 6 memzones totaling size 4.142822 MiB 00:06:37.934 size: 1.000366 MiB name: RG_ring_0_1499602 00:06:37.934 size: 1.000366 MiB name: RG_ring_1_1499602 00:06:37.934 size: 1.000366 MiB name: RG_ring_4_1499602 00:06:37.934 size: 1.000366 MiB name: RG_ring_5_1499602 00:06:37.934 size: 0.125366 MiB name: RG_ring_2_1499602 00:06:37.934 size: 0.015991 MiB name: RG_ring_3_1499602 00:06:37.934 end memzones------- 00:06:37.934 05:26:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:37.934 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:37.934 list of free elements. size: 12.519348 MiB 00:06:37.934 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:37.934 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:37.934 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:37.934 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:37.934 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:37.934 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:37.934 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:37.934 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:37.934 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:37.934 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:37.934 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:37.934 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:37.934 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:37.934 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:06:37.934 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:37.934 list of standard malloc elements. size: 199.218079 MiB 00:06:37.934 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:37.934 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:37.934 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:37.934 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:37.934 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:37.934 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:37.934 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:37.934 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:37.934 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:37.934 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:37.934 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:37.934 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:37.934 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:37.934 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:37.934 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:37.934 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:37.934 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:37.934 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:37.934 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:37.934 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:37.934 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:37.934 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:37.935 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:37.935 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:37.935 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:37.935 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:06:37.935 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:37.935 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:37.935 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:37.935 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:37.935 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:37.935 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:37.935 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:37.935 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:37.935 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:37.935 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:37.935 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:37.935 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:37.935 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:37.935 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:37.935 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:37.935 list of memzone associated elements. 
size: 602.262573 MiB
00:06:37.935 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:37.935 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:37.935 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:37.935 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:37.935 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:37.935 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1499602_0
00:06:37.935 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:37.935 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1499602_0
00:06:37.935 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:37.935 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1499602_0
00:06:37.935 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:37.935 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:37.935 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:37.935 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:37.935 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:37.935 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1499602
00:06:37.935 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:37.935 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1499602
00:06:37.935 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:37.935 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1499602
00:06:37.935 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:06:37.935 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:37.935 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:06:37.935 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:37.935 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:06:37.935 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:37.935 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:06:37.935 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:37.935 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:37.935 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1499602
00:06:37.935 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:37.935 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1499602
00:06:37.935 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:06:37.935 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1499602
00:06:37.935 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:06:37.935 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1499602
00:06:37.935 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:06:37.935 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1499602
00:06:37.935 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:06:37.935 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:37.935 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:06:37.935 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:37.935 element at address: 0x20001947c540 with size: 0.250488 MiB
00:06:37.935 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:37.935 element at address: 0x200003adf880 with size: 0.125488 MiB
00:06:37.935 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1499602
00:06:37.935 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:06:37.935 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:37.935 element at address: 0x200027e69100 with size: 0.023743 MiB
00:06:37.935 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:37.935 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:06:37.935 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1499602
00:06:37.935 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:06:37.935 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:37.935 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:06:37.935 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1499602
00:06:37.935 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:06:37.935 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1499602
00:06:37.935 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:06:37.935 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:37.935 05:26:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:37.935 05:26:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1499602
00:06:37.935 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1499602 ']'
00:06:37.935 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1499602
00:06:37.935 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:06:37.935 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:37.935 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1499602
00:06:37.935 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:37.935 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:37.935 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1499602'
00:06:37.935 killing process with pid 1499602
00:06:37.935 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1499602
00:06:37.935 05:26:31 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1499602
00:06:38.500
00:06:38.500 real 0m1.034s
00:06:38.500 user 0m1.013s
00:06:38.500 sys 0m0.395s
00:06:38.500 05:26:32 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:38.500 05:26:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:38.500 ************************************
00:06:38.500 END TEST dpdk_mem_utility
00:06:38.500 ************************************
00:06:38.500 05:26:32 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:38.500 05:26:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:38.500 05:26:32 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:38.500 05:26:32 -- common/autotest_common.sh@10 -- # set +x
00:06:38.500 ************************************
00:06:38.500 START TEST event
00:06:38.500 ************************************
00:06:38.500 05:26:32 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:38.500 * Looking for test storage...
00:06:38.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:38.500 05:26:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:38.500 05:26:32 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:38.500 05:26:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:38.500 05:26:32 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:06:38.500 05:26:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:38.500 05:26:32 event -- common/autotest_common.sh@10 -- # set +x
00:06:38.500 ************************************
00:06:38.500 START TEST event_perf
00:06:38.500 ************************************
00:06:38.500 05:26:32 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:38.500 Running I/O for 1 seconds...[2024-07-25 05:26:32.171025] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization...
00:06:38.500 [2024-07-25 05:26:32.171090] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499798 ]
00:06:38.500 EAL: No free 2048 kB hugepages reported on node 1
00:06:38.758 [2024-07-25 05:26:32.233575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:38.758 [2024-07-25 05:26:32.326093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:38.758 [2024-07-25 05:26:32.326160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:38.758 [2024-07-25 05:26:32.326263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:38.758 [2024-07-25 05:26:32.326267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.130 Running I/O for 1 seconds...
00:06:40.130 lcore 0: 237801
00:06:40.130 lcore 1: 237801
00:06:40.130 lcore 2: 237802
00:06:40.130 lcore 3: 237800
00:06:40.130 done.
00:06:40.130
00:06:40.130 real 0m1.252s
00:06:40.130 user 0m4.163s
00:06:40.130 sys 0m0.083s
00:06:40.130 05:26:33 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:40.130 05:26:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:40.130 ************************************
00:06:40.130 END TEST event_perf
00:06:40.130 ************************************
00:06:40.130 05:26:33 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:40.130 05:26:33 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:40.130 05:26:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:40.130 05:26:33 event -- common/autotest_common.sh@10 -- # set +x
00:06:40.130 ************************************
00:06:40.130 START TEST event_reactor
00:06:40.130 ************************************
00:06:40.130 05:26:33 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:40.130 [2024-07-25 05:26:33.472774] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization...
00:06:40.130 [2024-07-25 05:26:33.472842] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499953 ]
00:06:40.130 EAL: No free 2048 kB hugepages reported on node 1
00:06:40.130 [2024-07-25 05:26:33.533944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.130 [2024-07-25 05:26:33.626534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.063 test_start
00:06:41.063 oneshot
00:06:41.063 tick 100
00:06:41.063 tick 100
00:06:41.063 tick 250
00:06:41.063 tick 100
00:06:41.063 tick 100
00:06:41.063 tick 100
00:06:41.063 tick 250
00:06:41.063 tick 500
00:06:41.063 tick 100
00:06:41.063 tick 100
00:06:41.063 tick 250
00:06:41.063 tick 100
00:06:41.063 tick 100
00:06:41.063 test_end
00:06:41.063
00:06:41.063 real 0m1.249s
00:06:41.063 user 0m1.155s
00:06:41.063 sys 0m0.090s
00:06:41.063 05:26:34 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:41.063 05:26:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:41.063 ************************************
00:06:41.063 END TEST event_reactor
00:06:41.063 ************************************
00:06:41.063 05:26:34 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:41.063 05:26:34 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:41.063 05:26:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:41.063 05:26:34 event -- common/autotest_common.sh@10 -- # set +x
00:06:41.063 ************************************
00:06:41.063 START TEST event_reactor_perf
00:06:41.063 ************************************
00:06:41.063 05:26:34 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:41.321 [2024-07-25 05:26:34.772477] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization...
00:06:41.321 [2024-07-25 05:26:34.772541] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500111 ]
00:06:41.321 EAL: No free 2048 kB hugepages reported on node 1
00:06:41.321 [2024-07-25 05:26:34.831927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.321 [2024-07-25 05:26:34.924751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.692 test_start
00:06:42.692 test_end
00:06:42.692 Performance: 359645 events per second
00:06:42.692
00:06:42.692 real 0m1.240s
00:06:42.692 user 0m1.152s
00:06:42.692 sys 0m0.083s
00:06:42.692 05:26:35 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:42.692 05:26:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:42.692 ************************************
00:06:42.692 END TEST event_reactor_perf
00:06:42.692 ************************************
00:06:42.692 05:26:36 event -- event/event.sh@49 -- # uname -s
00:06:42.692 05:26:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:42.692 05:26:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:42.692 05:26:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:42.692 05:26:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:42.692 05:26:36 event -- common/autotest_common.sh@10 -- # set +x
00:06:42.692 ************************************
00:06:42.692 START TEST event_scheduler
00:06:42.692 ************************************
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:42.693 * Looking for test storage...
00:06:42.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:42.693 05:26:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:42.693 05:26:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1500295
00:06:42.693 05:26:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:42.693 05:26:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:42.693 05:26:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1500295
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1500295 ']'
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:42.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:42.693 [2024-07-25 05:26:36.142128] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization...
00:06:42.693 [2024-07-25 05:26:36.142200] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500295 ]
00:06:42.693 EAL: No free 2048 kB hugepages reported on node 1
00:06:42.693 [2024-07-25 05:26:36.200154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:42.693 [2024-07-25 05:26:36.286883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.693 [2024-07-25 05:26:36.286949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:42.693 [2024-07-25 05:26:36.287014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:42.693 [2024-07-25 05:26:36.287017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:06:42.693 05:26:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:42.693 [2024-07-25 05:26:36.351829] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:42.693 [2024-07-25 05:26:36.351854] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:06:42.693 [2024-07-25 05:26:36.351887] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:42.693 [2024-07-25 05:26:36.351898] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:42.693 [2024-07-25 05:26:36.351908] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.693 05:26:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.693 05:26:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:42.950 [2024-07-25 05:26:36.444827] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:42.950 05:26:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.950 05:26:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:42.950 05:26:36 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:42.950 05:26:36 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:42.950 05:26:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:42.950 ************************************
00:06:42.950 START TEST scheduler_create_thread
00:06:42.950 ************************************
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 2
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 3
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 4
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 5
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 6
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 7
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 8
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 9
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 10
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.951 05:26:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:43.514 05:26:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.514 05:26:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:43.514 05:26:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:43.514 05:26:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.514 05:26:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:44.884 05:26:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:44.884
00:06:44.884 real 0m1.752s
00:06:44.884 user 0m0.010s
00:06:44.884 sys 0m0.005s
00:06:44.884 05:26:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:44.884 05:26:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:44.884 ************************************
00:06:44.884 END TEST scheduler_create_thread
00:06:44.884 ************************************
00:06:44.884 05:26:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:44.884 05:26:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1500295
00:06:44.884 05:26:38 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1500295 ']'
00:06:44.884 05:26:38 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1500295
00:06:44.884 05:26:38 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:06:44.884 05:26:38 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:44.884 05:26:38 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1500295
00:06:44.884 05:26:38 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:06:44.884 05:26:38 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:06:44.884 05:26:38 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1500295'
00:06:44.884 killing process with pid 1500295
00:06:44.884 05:26:38 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1500295
00:06:44.884 05:26:38 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1500295
00:06:45.141 [2024-07-25 05:26:38.704385] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:45.398
00:06:45.398 real 0m2.870s
00:06:45.398 user 0m3.711s
00:06:45.398 sys 0m0.327s
00:06:45.398 05:26:38 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:45.398 05:26:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:45.398 ************************************
00:06:45.398 END TEST event_scheduler
00:06:45.398 ************************************
00:06:45.398 05:26:38 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:45.398 05:26:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:45.398 05:26:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:45.398 05:26:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:45.398 05:26:38 event -- common/autotest_common.sh@10 -- # set +x
00:06:45.398 ************************************
00:06:45.398 START TEST app_repeat
00:06:45.398 ************************************
00:06:45.398 05:26:38 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1500732
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1500732'
00:06:45.398 Process app_repeat pid: 1500732
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:45.398 spdk_app_start Round 0
00:06:45.398 05:26:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1500732 /var/tmp/spdk-nbd.sock
00:06:45.398 05:26:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1500732 ']'
00:06:45.398 05:26:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:45.398 05:26:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:45.398 05:26:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:45.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:45.398 05:26:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:45.398 05:26:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:45.398 [2024-07-25 05:26:39.001231] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization...
00:06:45.398 [2024-07-25 05:26:39.001323] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500732 ]
00:06:45.398 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.399 [2024-07-25 05:26:39.063402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:45.657 [2024-07-25 05:26:39.154378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:45.657 [2024-07-25 05:26:39.154384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.657 05:26:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:45.657 05:26:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:45.657 05:26:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:45.914 Malloc0
00:06:45.914 05:26:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:46.172 Malloc1
00:06:46.172 05:26:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:46.172 05:26:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:46.440 /dev/nbd0
00:06:46.440 05:26:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:46.440 05:26:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:46.440 1+0 records in
00:06:46.440 1+0 records out
00:06:46.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020246 s, 20.2 MB/s
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:46.440 05:26:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:46.440 05:26:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:46.440 05:26:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:46.440 05:26:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:46.697 /dev/nbd1
00:06:46.697 05:26:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:46.697 05:26:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@884
-- # (( i = 1 )) 00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.697 1+0 records in 00:06:46.697 1+0 records out 00:06:46.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018532 s, 22.1 MB/s 00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.697 05:26:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:46.697 05:26:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.697 05:26:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.697 05:26:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.697 05:26:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.697 05:26:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.955 { 00:06:46.955 "nbd_device": "/dev/nbd0", 00:06:46.955 "bdev_name": "Malloc0" 00:06:46.955 }, 00:06:46.955 { 00:06:46.955 "nbd_device": "/dev/nbd1", 00:06:46.955 "bdev_name": "Malloc1" 00:06:46.955 } 00:06:46.955 ]' 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.955 { 00:06:46.955 "nbd_device": 
"/dev/nbd0", 00:06:46.955 "bdev_name": "Malloc0" 00:06:46.955 }, 00:06:46.955 { 00:06:46.955 "nbd_device": "/dev/nbd1", 00:06:46.955 "bdev_name": "Malloc1" 00:06:46.955 } 00:06:46.955 ]' 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.955 /dev/nbd1' 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.955 /dev/nbd1' 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.955 05:26:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.212 256+0 records in 00:06:47.212 256+0 records out 00:06:47.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506331 s, 207 MB/s 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.212 05:26:40 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.212 256+0 records in 00:06:47.212 256+0 records out 00:06:47.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214806 s, 48.8 MB/s 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.212 256+0 records in 00:06:47.212 256+0 records out 00:06:47.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277543 s, 37.8 MB/s 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:47.212 05:26:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:47.470 05:26:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:47.470 05:26:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:47.470 05:26:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:47.470 05:26:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:47.470 05:26:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:47.470 05:26:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:47.470 05:26:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:47.470 05:26:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:47.470 05:26:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:47.470 05:26:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:47.728 05:26:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:47.986 05:26:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:47.986 05:26:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:48.243 05:26:41 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:48.501 [2024-07-25 05:26:42.052194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:48.501 [2024-07-25 05:26:42.140505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:48.501 [2024-07-25 05:26:42.140505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.501 [2024-07-25 05:26:42.201167] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:48.501 [2024-07-25 05:26:42.201235] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:51.779 05:26:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:51.779 05:26:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:51.779 spdk_app_start Round 1
00:06:51.779 05:26:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1500732 /var/tmp/spdk-nbd.sock
00:06:51.779 05:26:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1500732 ']'
00:06:51.779 05:26:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:51.779 05:26:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:51.779 05:26:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:51.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:51.779 05:26:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:51.779 05:26:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:51.779 05:26:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:51.779 05:26:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:51.779 05:26:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:51.779 Malloc0
00:06:51.779 05:26:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:52.037 Malloc1
00:06:52.037 05:26:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:52.037 05:26:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:52.295 /dev/nbd0
00:06:52.295 05:26:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:52.295 05:26:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:52.295 1+0 records in
00:06:52.295 1+0 records out
00:06:52.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187403 s, 21.9 MB/s
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:52.295 05:26:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:52.295 05:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:52.295 05:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:52.295 05:26:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:52.553 /dev/nbd1
00:06:52.553 05:26:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:52.553 05:26:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:52.553 1+0 records in
00:06:52.553 1+0 records out
00:06:52.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226125 s, 18.1 MB/s
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:52.553 05:26:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:52.553 05:26:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:52.553 05:26:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:52.553 05:26:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:52.553 05:26:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.553 05:26:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:52.811 {
00:06:52.811 "nbd_device": "/dev/nbd0",
00:06:52.811 "bdev_name": "Malloc0"
00:06:52.811 },
00:06:52.811 {
00:06:52.811 "nbd_device": "/dev/nbd1",
00:06:52.811 "bdev_name": "Malloc1"
00:06:52.811 }
00:06:52.811 ]'
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:52.811 {
00:06:52.811 "nbd_device": "/dev/nbd0",
00:06:52.811 "bdev_name": "Malloc0"
00:06:52.811 },
00:06:52.811 {
00:06:52.811 "nbd_device": "/dev/nbd1",
00:06:52.811 "bdev_name": "Malloc1"
00:06:52.811 }
00:06:52.811 ]'
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:52.811 /dev/nbd1'
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:52.811 /dev/nbd1'
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:52.811 256+0 records in
00:06:52.811 256+0 records out
00:06:52.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509107 s, 206 MB/s
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:52.811 256+0 records in
00:06:52.811 256+0 records out
00:06:52.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273928 s, 38.3 MB/s
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:52.811 05:26:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:53.069 256+0 records in
00:06:53.069 256+0 records out
00:06:53.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257367 s, 40.7 MB/s
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:53.069 05:26:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:53.327 05:26:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:53.327 05:26:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:53.327 05:26:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:53.327 05:26:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:53.327 05:26:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:53.327 05:26:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:53.327 05:26:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:53.327 05:26:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:53.327 05:26:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:53.327 05:26:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:53.584 05:26:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:53.842 05:26:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:53.842 05:26:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:54.100 05:26:47 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:54.358 [2024-07-25 05:26:47.851506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:54.358 [2024-07-25 05:26:47.941171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:54.358 [2024-07-25 05:26:47.941176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.358 [2024-07-25 05:26:48.003603] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:54.358 [2024-07-25 05:26:48.003683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:57.638 05:26:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:57.638 05:26:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:57.638 spdk_app_start Round 2
00:06:57.638 05:26:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1500732 /var/tmp/spdk-nbd.sock
00:06:57.638 05:26:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1500732 ']'
00:06:57.638 05:26:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:57.638 05:26:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:57.638 05:26:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:57.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:57.638 05:26:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:57.638 05:26:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:57.638 05:26:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:57.638 05:26:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:57.638 05:26:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:57.638 Malloc0
00:06:57.638 05:26:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:57.896 Malloc1
00:06:57.896 05:26:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:57.896 05:26:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:58.153 /dev/nbd0
00:06:58.153 05:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:58.153 05:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:58.153 1+0 records in
00:06:58.153 1+0 records out
00:06:58.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154122 s, 26.6 MB/s
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:58.153 05:26:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:58.153 05:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:58.153 05:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:58.153 05:26:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:58.411 /dev/nbd1
00:06:58.411 05:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:58.411 05:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:58.411 1+0 records in
00:06:58.411 1+0 records out
00:06:58.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177577 s, 23.1 MB/s
00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:58.411 05:26:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:58.411 05:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.411 05:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.411 05:26:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.411 05:26:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.411 05:26:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.668 { 00:06:58.668 "nbd_device": "/dev/nbd0", 00:06:58.668 "bdev_name": "Malloc0" 00:06:58.668 }, 00:06:58.668 { 00:06:58.668 "nbd_device": "/dev/nbd1", 00:06:58.668 "bdev_name": "Malloc1" 00:06:58.668 } 00:06:58.668 ]' 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.668 { 00:06:58.668 "nbd_device": "/dev/nbd0", 00:06:58.668 "bdev_name": "Malloc0" 00:06:58.668 }, 00:06:58.668 { 00:06:58.668 "nbd_device": "/dev/nbd1", 00:06:58.668 "bdev_name": "Malloc1" 00:06:58.668 } 00:06:58.668 ]' 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.668 /dev/nbd1' 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.668 /dev/nbd1' 00:06:58.668 
05:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:58.668 256+0 records in 00:06:58.668 256+0 records out 00:06:58.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531321 s, 197 MB/s 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.668 256+0 records in 00:06:58.668 256+0 records out 00:06:58.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213542 s, 49.1 MB/s 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:58.668 256+0 records in 00:06:58.668 256+0 records out 00:06:58.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292185 s, 35.9 MB/s 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:58.668 05:26:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.669 05:26:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.926 05:26:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.926 05:26:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.926 05:26:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.926 05:26:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.926 05:26:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.926 05:26:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.926 05:26:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.926 05:26:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.926 05:26:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.926 05:26:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.184 05:26:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.184 05:26:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.184 05:26:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.184 05:26:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.184 05:26:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.184 05:26:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.184 05:26:52 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:59.184 05:26:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.184 05:26:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.184 05:26:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.184 05:26:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.442 05:26:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.442 05:26:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.008 05:26:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:00.008 [2024-07-25 05:26:53.652896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.270 [2024-07-25 05:26:53.744019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.270 [2024-07-25 05:26:53.744023] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.270 [2024-07-25 05:26:53.802360] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.270 [2024-07-25 05:26:53.802422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:02.798 05:26:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1500732 /var/tmp/spdk-nbd.sock 00:07:02.798 05:26:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1500732 ']' 00:07:02.798 05:26:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.798 05:26:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.798 05:26:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:02.798 05:26:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.798 05:26:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:03.056 05:26:56 event.app_repeat -- event/event.sh@39 -- # killprocess 1500732 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1500732 ']' 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1500732 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1500732 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1500732' 00:07:03.056 killing process with pid 1500732 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1500732 00:07:03.056 05:26:56 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1500732 00:07:03.314 spdk_app_start is called in Round 0. 00:07:03.314 Shutdown signal received, stop current app iteration 00:07:03.314 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 reinitialization... 00:07:03.314 spdk_app_start is called in Round 1. 00:07:03.314 Shutdown signal received, stop current app iteration 00:07:03.314 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 reinitialization... 00:07:03.314 spdk_app_start is called in Round 2. 
00:07:03.314 Shutdown signal received, stop current app iteration 00:07:03.314 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 reinitialization... 00:07:03.314 spdk_app_start is called in Round 3. 00:07:03.314 Shutdown signal received, stop current app iteration 00:07:03.314 05:26:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:03.314 05:26:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:03.314 00:07:03.314 real 0m17.935s 00:07:03.314 user 0m39.067s 00:07:03.314 sys 0m3.194s 00:07:03.314 05:26:56 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.314 05:26:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.314 ************************************ 00:07:03.314 END TEST app_repeat 00:07:03.314 ************************************ 00:07:03.314 05:26:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:03.314 05:26:56 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:03.314 05:26:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.314 05:26:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.314 05:26:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.315 ************************************ 00:07:03.315 START TEST cpu_locks 00:07:03.315 ************************************ 00:07:03.315 05:26:56 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:03.315 * Looking for test storage... 
00:07:03.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:03.573 05:26:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:03.573 05:26:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:03.573 05:26:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:03.573 05:26:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:03.573 05:26:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.573 05:26:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.573 05:26:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.573 ************************************ 00:07:03.573 START TEST default_locks 00:07:03.573 ************************************ 00:07:03.573 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:03.573 05:26:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1503085 00:07:03.573 05:26:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.573 05:26:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1503085 00:07:03.573 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1503085 ']' 00:07:03.573 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.573 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.573 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:03.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.573 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.573 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.573 [2024-07-25 05:26:57.094742] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:03.573 [2024-07-25 05:26:57.094820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503085 ] 00:07:03.573 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.573 [2024-07-25 05:26:57.153147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.573 [2024-07-25 05:26:57.241533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.831 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.831 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:03.831 05:26:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1503085 00:07:03.831 05:26:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1503085 00:07:03.831 05:26:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.396 lslocks: write error 00:07:04.396 05:26:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1503085 00:07:04.396 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1503085 ']' 00:07:04.396 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1503085 00:07:04.396 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:04.396 05:26:57 event.cpu_locks.default_locks 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.396 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503085 00:07:04.396 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.396 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.396 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503085' 00:07:04.396 killing process with pid 1503085 00:07:04.396 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1503085 00:07:04.396 05:26:57 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1503085 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1503085 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1503085 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1503085 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1503085 ']' 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1503085) - No such process 00:07:04.654 ERROR: process (pid: 1503085) is no longer running 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:04.654 05:26:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:04.655 05:26:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:04.655 00:07:04.655 real 0m1.288s 00:07:04.655 user 0m1.212s 00:07:04.655 sys 0m0.569s 00:07:04.655 05:26:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.655 05:26:58 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.655 ************************************ 00:07:04.655 END TEST default_locks 00:07:04.655 ************************************ 00:07:04.655 05:26:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:04.655 05:26:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.655 05:26:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.655 05:26:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.913 ************************************ 00:07:04.913 START TEST default_locks_via_rpc 00:07:04.913 ************************************ 00:07:04.913 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:04.913 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1503253 00:07:04.913 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.913 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1503253 00:07:04.913 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1503253 ']' 00:07:04.913 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.913 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.913 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.913 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.913 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.913 [2024-07-25 05:26:58.430941] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:04.913 [2024-07-25 05:26:58.431038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503253 ] 00:07:04.913 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.913 [2024-07-25 05:26:58.489140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.913 [2024-07-25 05:26:58.577502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1503253 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1503253 00:07:05.171 05:26:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1503253 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1503253 ']' 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1503253 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503253 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503253' 00:07:05.429 killing process with pid 1503253 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@969 -- # kill 1503253 00:07:05.429 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1503253 00:07:05.993 00:07:05.993 real 0m1.140s 00:07:05.993 user 0m1.082s 00:07:05.993 sys 0m0.540s 00:07:05.993 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.993 05:26:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.993 ************************************ 00:07:05.993 END TEST default_locks_via_rpc 00:07:05.993 ************************************ 00:07:05.993 05:26:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:05.993 05:26:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.993 05:26:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.993 05:26:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.993 ************************************ 00:07:05.993 START TEST non_locking_app_on_locked_coremask 00:07:05.993 ************************************ 00:07:05.993 05:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:05.993 05:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1503420 00:07:05.993 05:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.993 05:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1503420 /var/tmp/spdk.sock 00:07:05.993 05:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1503420 ']' 00:07:05.993 05:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.993 05:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.993 05:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.993 05:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.993 05:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.993 [2024-07-25 05:26:59.617365] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:05.993 [2024-07-25 05:26:59.617465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503420 ] 00:07:05.993 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.993 [2024-07-25 05:26:59.674325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.251 [2024-07-25 05:26:59.763822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1503435 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
--disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1503435 /var/tmp/spdk2.sock 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1503435 ']' 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.508 05:27:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.508 [2024-07-25 05:27:00.063032] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:06.508 [2024-07-25 05:27:00.063129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503435 ] 00:07:06.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.508 [2024-07-25 05:27:00.157956] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:06.508 [2024-07-25 05:27:00.157991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.765 [2024-07-25 05:27:00.341788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.331 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.331 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:07.331 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1503420 00:07:07.331 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1503420 00:07:07.331 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.896 lslocks: write error 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1503420 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1503420 ']' 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1503420 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503420 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1503420' 00:07:07.896 killing process with pid 1503420 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1503420 00:07:07.896 05:27:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1503420 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1503435 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1503435 ']' 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1503435 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503435 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503435' 00:07:08.829 killing process with pid 1503435 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1503435 00:07:08.829 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1503435 00:07:09.087 00:07:09.087 real 0m3.181s 00:07:09.087 user 0m3.328s 00:07:09.087 sys 0m1.080s 00:07:09.087 05:27:02 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.087 05:27:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.087 ************************************ 00:07:09.087 END TEST non_locking_app_on_locked_coremask 00:07:09.087 ************************************ 00:07:09.087 05:27:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:09.087 05:27:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.087 05:27:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.087 05:27:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.345 ************************************ 00:07:09.345 START TEST locking_app_on_unlocked_coremask 00:07:09.345 ************************************ 00:07:09.345 05:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:09.345 05:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1503966 00:07:09.345 05:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:09.345 05:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1503966 /var/tmp/spdk.sock 00:07:09.345 05:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1503966 ']' 00:07:09.345 05:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.345 05:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.345 05:27:02 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.345 05:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.345 05:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.345 [2024-07-25 05:27:02.847697] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:09.345 [2024-07-25 05:27:02.847792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503966 ] 00:07:09.345 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.345 [2024-07-25 05:27:02.904707] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:09.345 [2024-07-25 05:27:02.904739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.345 [2024-07-25 05:27:02.993449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1503970 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1503970 /var/tmp/spdk2.sock 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1503970 ']' 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.603 05:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.603 [2024-07-25 05:27:03.300754] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:07:09.603 [2024-07-25 05:27:03.300840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503970 ] 00:07:09.860 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.860 [2024-07-25 05:27:03.399133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.117 [2024-07-25 05:27:03.585173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.682 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.682 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:10.682 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1503970 00:07:10.682 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1503970 00:07:10.682 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.248 lslocks: write error 00:07:11.248 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1503966 00:07:11.248 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1503966 ']' 00:07:11.248 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1503966 00:07:11.248 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:11.248 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.248 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503966 00:07:11.248 05:27:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.248 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.248 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503966' 00:07:11.248 killing process with pid 1503966 00:07:11.248 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1503966 00:07:11.248 05:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1503966 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1503970 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1503970 ']' 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1503970 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503970 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503970' 00:07:12.182 killing process with pid 1503970 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 1503970 00:07:12.182 05:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1503970 00:07:12.441 00:07:12.441 real 0m3.272s 00:07:12.441 user 0m3.409s 00:07:12.441 sys 0m1.106s 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.441 ************************************ 00:07:12.441 END TEST locking_app_on_unlocked_coremask 00:07:12.441 ************************************ 00:07:12.441 05:27:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:12.441 05:27:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.441 05:27:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.441 05:27:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.441 ************************************ 00:07:12.441 START TEST locking_app_on_locked_coremask 00:07:12.441 ************************************ 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1504396 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1504396 /var/tmp/spdk.sock 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1504396 ']' 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.441 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.699 [2024-07-25 05:27:06.168647] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:12.699 [2024-07-25 05:27:06.168748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504396 ] 00:07:12.699 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.699 [2024-07-25 05:27:06.231958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.699 [2024-07-25 05:27:06.326445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1504410 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:12.957 
05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1504410 /var/tmp/spdk2.sock 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1504410 /var/tmp/spdk2.sock 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1504410 /var/tmp/spdk2.sock 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1504410 ']' 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.957 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:12.958 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.958 05:27:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.958 [2024-07-25 05:27:06.644292] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:12.958 [2024-07-25 05:27:06.644391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504410 ] 00:07:13.254 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.254 [2024-07-25 05:27:06.742855] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1504396 has claimed it. 00:07:13.254 [2024-07-25 05:27:06.742932] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:13.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1504410) - No such process 00:07:13.819 ERROR: process (pid: 1504410) is no longer running 00:07:13.819 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.819 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:13.819 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:13.819 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.819 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.819 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.819 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 1504396 00:07:13.819 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1504396 00:07:13.819 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.077 lslocks: write error 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1504396 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1504396 ']' 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1504396 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1504396 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1504396' 00:07:14.077 killing process with pid 1504396 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1504396 00:07:14.077 05:27:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1504396 00:07:14.336 00:07:14.336 real 0m1.895s 00:07:14.336 user 0m2.040s 00:07:14.336 sys 0m0.625s 00:07:14.336 05:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.336 
05:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.336 ************************************ 00:07:14.336 END TEST locking_app_on_locked_coremask 00:07:14.336 ************************************ 00:07:14.336 05:27:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:14.336 05:27:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.336 05:27:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.336 05:27:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.594 ************************************ 00:07:14.594 START TEST locking_overlapped_coremask 00:07:14.594 ************************************ 00:07:14.594 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:14.595 05:27:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1505011 00:07:14.595 05:27:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1505011 /var/tmp/spdk.sock 00:07:14.595 05:27:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:14.595 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1505011 ']' 00:07:14.595 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.595 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.595 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:14.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.595 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.595 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.595 [2024-07-25 05:27:08.110414] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:14.595 [2024-07-25 05:27:08.110505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505011 ] 00:07:14.595 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.595 [2024-07-25 05:27:08.172169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.595 [2024-07-25 05:27:08.267125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.595 [2024-07-25 05:27:08.267190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.595 [2024-07-25 05:27:08.267192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1505104 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1505104 /var/tmp/spdk2.sock 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1505104 /var/tmp/spdk2.sock 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1505104 /var/tmp/spdk2.sock 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1505104 ']' 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.853 05:27:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.111 [2024-07-25 05:27:08.563880] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:07:15.111 [2024-07-25 05:27:08.563983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505104 ] 00:07:15.111 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.111 [2024-07-25 05:27:08.653747] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1505011 has claimed it. 00:07:15.111 [2024-07-25 05:27:08.653802] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1505104) - No such process 00:07:15.677 ERROR: process (pid: 1505104) is no longer running 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.677 05:27:09 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1505011 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1505011 ']' 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1505011 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1505011 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1505011' 00:07:15.677 killing process with pid 1505011 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1505011 00:07:15.677 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1505011 00:07:16.243 00:07:16.243 real 0m1.618s 00:07:16.243 user 0m4.375s 00:07:16.243 sys 0m0.440s 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.243 05:27:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.243 ************************************ 00:07:16.243 END TEST locking_overlapped_coremask 00:07:16.243 ************************************ 00:07:16.243 05:27:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:16.243 05:27:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.243 05:27:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.243 05:27:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.243 ************************************ 00:07:16.243 START TEST locking_overlapped_coremask_via_rpc 00:07:16.243 ************************************ 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1505379 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1505379 /var/tmp/spdk.sock 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1505379 ']' 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.243 05:27:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.244 [2024-07-25 05:27:09.773742] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:16.244 [2024-07-25 05:27:09.773828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505379 ] 00:07:16.244 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.244 [2024-07-25 05:27:09.830024] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:16.244 [2024-07-25 05:27:09.830062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.244 [2024-07-25 05:27:09.919429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.244 [2024-07-25 05:27:09.919486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.244 [2024-07-25 05:27:09.919489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.501 05:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.501 05:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:16.501 05:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1505406 00:07:16.501 05:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1505406 /var/tmp/spdk2.sock 00:07:16.501 05:27:10 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:16.501 05:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1505406 ']' 00:07:16.501 05:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.501 05:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.501 05:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.501 05:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.501 05:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.759 [2024-07-25 05:27:10.221730] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:16.759 [2024-07-25 05:27:10.221841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505406 ] 00:07:16.759 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.759 [2024-07-25 05:27:10.313804] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:16.759 [2024-07-25 05:27:10.313839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.016 [2024-07-25 05:27:10.489767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.016 [2024-07-25 05:27:10.489831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:17.016 [2024-07-25 05:27:10.489833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.580 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.580 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:17.580 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:17.580 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.580 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.581 05:27:11 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.581 [2024-07-25 05:27:11.171347] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1505379 has claimed it. 00:07:17.581 request: 00:07:17.581 { 00:07:17.581 "method": "framework_enable_cpumask_locks", 00:07:17.581 "req_id": 1 00:07:17.581 } 00:07:17.581 Got JSON-RPC error response 00:07:17.581 response: 00:07:17.581 { 00:07:17.581 "code": -32603, 00:07:17.581 "message": "Failed to claim CPU core: 2" 00:07:17.581 } 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1505379 /var/tmp/spdk.sock 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 1505379 ']' 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.581 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.837 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.837 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:17.837 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1505406 /var/tmp/spdk2.sock 00:07:17.837 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1505406 ']' 00:07:17.837 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.837 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.837 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:17.837 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.837 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.094 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.094 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.094 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:18.094 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:18.094 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:18.094 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:18.094 00:07:18.094 real 0m1.962s 00:07:18.094 user 0m1.013s 00:07:18.094 sys 0m0.195s 00:07:18.094 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.094 05:27:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.094 ************************************ 00:07:18.094 END TEST locking_overlapped_coremask_via_rpc 00:07:18.094 ************************************ 00:07:18.094 05:27:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:18.094 05:27:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1505379 ]] 00:07:18.094 05:27:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1505379 00:07:18.094 05:27:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1505379 ']' 00:07:18.094 05:27:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1505379 00:07:18.094 05:27:11 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:18.094 05:27:11 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.094 05:27:11 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1505379 00:07:18.094 05:27:11 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.094 05:27:11 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.094 05:27:11 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1505379' 00:07:18.094 killing process with pid 1505379 00:07:18.094 05:27:11 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1505379 00:07:18.094 05:27:11 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1505379 00:07:18.658 05:27:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1505406 ]] 00:07:18.658 05:27:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1505406 00:07:18.658 05:27:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1505406 ']' 00:07:18.658 05:27:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1505406 00:07:18.658 05:27:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:18.658 05:27:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.658 05:27:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1505406 00:07:18.658 05:27:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:18.658 05:27:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:18.658 05:27:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1505406' 00:07:18.658 killing process with pid 1505406 00:07:18.658 05:27:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1505406 00:07:18.658 05:27:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1505406 00:07:18.915 05:27:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.915 05:27:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:18.915 05:27:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1505379 ]] 00:07:18.915 05:27:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1505379 00:07:18.915 05:27:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1505379 ']' 00:07:18.915 05:27:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1505379 00:07:18.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1505379) - No such process 00:07:18.915 05:27:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1505379 is not found' 00:07:18.915 Process with pid 1505379 is not found 00:07:18.915 05:27:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1505406 ]] 00:07:18.915 05:27:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1505406 00:07:18.915 05:27:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1505406 ']' 00:07:18.915 05:27:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1505406 00:07:18.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1505406) - No such process 00:07:18.916 05:27:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1505406 is not found' 00:07:18.916 Process with pid 1505406 is not found 00:07:18.916 05:27:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.916 00:07:18.916 real 0m15.600s 00:07:18.916 user 0m27.137s 00:07:18.916 sys 0m5.463s 00:07:18.916 05:27:12 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.916 
05:27:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.916 ************************************ 00:07:18.916 END TEST cpu_locks 00:07:18.916 ************************************ 00:07:18.916 00:07:18.916 real 0m40.504s 00:07:18.916 user 1m16.534s 00:07:18.916 sys 0m9.469s 00:07:18.916 05:27:12 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.916 05:27:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.916 ************************************ 00:07:18.916 END TEST event 00:07:18.916 ************************************ 00:07:18.916 05:27:12 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:18.916 05:27:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.916 05:27:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.916 05:27:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.173 ************************************ 00:07:19.173 START TEST thread 00:07:19.173 ************************************ 00:07:19.173 05:27:12 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:19.173 * Looking for test storage... 
00:07:19.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:19.173 05:27:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.173 05:27:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:19.173 05:27:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.173 05:27:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.173 ************************************ 00:07:19.173 START TEST thread_poller_perf 00:07:19.173 ************************************ 00:07:19.173 05:27:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.173 [2024-07-25 05:27:12.708020] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:19.173 [2024-07-25 05:27:12.708070] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505871 ] 00:07:19.173 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.173 [2024-07-25 05:27:12.767577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.173 [2024-07-25 05:27:12.856641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.173 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:20.543 ====================================== 00:07:20.543 busy:2714829407 (cyc) 00:07:20.543 total_run_count: 291000 00:07:20.543 tsc_hz: 2700000000 (cyc) 00:07:20.543 ====================================== 00:07:20.543 poller_cost: 9329 (cyc), 3455 (nsec) 00:07:20.543 00:07:20.543 real 0m1.252s 00:07:20.543 user 0m1.173s 00:07:20.543 sys 0m0.073s 00:07:20.543 05:27:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.543 05:27:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.543 ************************************ 00:07:20.543 END TEST thread_poller_perf 00:07:20.543 ************************************ 00:07:20.543 05:27:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:20.543 05:27:13 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:20.543 05:27:13 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.543 05:27:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.543 ************************************ 00:07:20.543 START TEST thread_poller_perf 00:07:20.543 ************************************ 00:07:20.543 05:27:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:20.543 [2024-07-25 05:27:14.015116] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:07:20.543 [2024-07-25 05:27:14.015183] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506026 ] 00:07:20.543 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.543 [2024-07-25 05:27:14.078933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.543 [2024-07-25 05:27:14.178492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.543 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:21.913 ====================================== 00:07:21.913 busy:2702969686 (cyc) 00:07:21.913 total_run_count: 3921000 00:07:21.913 tsc_hz: 2700000000 (cyc) 00:07:21.913 ====================================== 00:07:21.913 poller_cost: 689 (cyc), 255 (nsec) 00:07:21.913 00:07:21.913 real 0m1.259s 00:07:21.913 user 0m1.173s 00:07:21.913 sys 0m0.081s 00:07:21.913 05:27:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.913 05:27:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:21.913 ************************************ 00:07:21.913 END TEST thread_poller_perf 00:07:21.913 ************************************ 00:07:21.913 05:27:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:21.913 00:07:21.913 real 0m2.654s 00:07:21.913 user 0m2.405s 00:07:21.913 sys 0m0.248s 00:07:21.913 05:27:15 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.913 05:27:15 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.913 ************************************ 00:07:21.913 END TEST thread 00:07:21.913 ************************************ 00:07:21.913 05:27:15 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:21.913 05:27:15 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:07:21.913 05:27:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.913 05:27:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.913 05:27:15 -- common/autotest_common.sh@10 -- # set +x 00:07:21.913 ************************************ 00:07:21.913 START TEST app_cmdline 00:07:21.913 ************************************ 00:07:21.913 05:27:15 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:21.913 * Looking for test storage... 00:07:21.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:21.913 05:27:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:21.913 05:27:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1506225 00:07:21.913 05:27:15 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:21.913 05:27:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1506225 00:07:21.913 05:27:15 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1506225 ']' 00:07:21.913 05:27:15 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.913 05:27:15 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.913 05:27:15 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.913 05:27:15 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.913 05:27:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.913 [2024-07-25 05:27:15.422189] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:07:21.913 [2024-07-25 05:27:15.422305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506225 ] 00:07:21.913 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.913 [2024-07-25 05:27:15.479779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.913 [2024-07-25 05:27:15.569468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.171 05:27:15 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.171 05:27:15 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:22.171 05:27:15 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:22.429 { 00:07:22.429 "version": "SPDK v24.09-pre git sha1 d005e023b", 00:07:22.429 "fields": { 00:07:22.429 "major": 24, 00:07:22.429 "minor": 9, 00:07:22.429 "patch": 0, 00:07:22.429 "suffix": "-pre", 00:07:22.429 "commit": "d005e023b" 00:07:22.429 } 00:07:22.429 } 00:07:22.429 05:27:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:22.429 05:27:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:22.429 05:27:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:22.429 05:27:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:22.429 05:27:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:22.429 05:27:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:22.429 05:27:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:22.429 05:27:16 app_cmdline -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.429 05:27:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:22.429 05:27:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:22.429 05:27:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:22.429 05:27:16 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:22.687 request: 00:07:22.687 { 00:07:22.687 "method": "env_dpdk_get_mem_stats", 00:07:22.687 "req_id": 1 
00:07:22.687 } 00:07:22.687 Got JSON-RPC error response 00:07:22.687 response: 00:07:22.687 { 00:07:22.687 "code": -32601, 00:07:22.687 "message": "Method not found" 00:07:22.687 } 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.687 05:27:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1506225 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1506225 ']' 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1506225 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1506225 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1506225' 00:07:22.687 killing process with pid 1506225 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@969 -- # kill 1506225 00:07:22.687 05:27:16 app_cmdline -- common/autotest_common.sh@974 -- # wait 1506225 00:07:23.254 00:07:23.254 real 0m1.456s 00:07:23.254 user 0m1.780s 00:07:23.254 sys 0m0.454s 00:07:23.254 05:27:16 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.254 05:27:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.254 ************************************ 00:07:23.254 END TEST app_cmdline 00:07:23.254 ************************************ 00:07:23.254 05:27:16 -- 
spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:23.254 05:27:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.254 05:27:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.254 05:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:23.254 ************************************ 00:07:23.254 START TEST version 00:07:23.254 ************************************ 00:07:23.254 05:27:16 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:23.254 * Looking for test storage... 00:07:23.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:23.254 05:27:16 version -- app/version.sh@17 -- # get_header_version major 00:07:23.254 05:27:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.254 05:27:16 version -- app/version.sh@14 -- # cut -f2 00:07:23.254 05:27:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.254 05:27:16 version -- app/version.sh@17 -- # major=24 00:07:23.254 05:27:16 version -- app/version.sh@18 -- # get_header_version minor 00:07:23.254 05:27:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.254 05:27:16 version -- app/version.sh@14 -- # cut -f2 00:07:23.254 05:27:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.254 05:27:16 version -- app/version.sh@18 -- # minor=9 00:07:23.254 05:27:16 version -- app/version.sh@19 -- # get_header_version patch 00:07:23.254 05:27:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.254 05:27:16 version -- app/version.sh@14 -- # cut -f2 00:07:23.254 05:27:16 
version -- app/version.sh@14 -- # tr -d '"' 00:07:23.254 05:27:16 version -- app/version.sh@19 -- # patch=0 00:07:23.254 05:27:16 version -- app/version.sh@20 -- # get_header_version suffix 00:07:23.254 05:27:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.254 05:27:16 version -- app/version.sh@14 -- # cut -f2 00:07:23.254 05:27:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.254 05:27:16 version -- app/version.sh@20 -- # suffix=-pre 00:07:23.254 05:27:16 version -- app/version.sh@22 -- # version=24.9 00:07:23.254 05:27:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:23.254 05:27:16 version -- app/version.sh@28 -- # version=24.9rc0 00:07:23.254 05:27:16 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.254 05:27:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:23.254 05:27:16 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:23.254 05:27:16 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:23.254 00:07:23.254 real 0m0.109s 00:07:23.254 user 0m0.067s 00:07:23.254 sys 0m0.063s 00:07:23.254 05:27:16 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.254 05:27:16 version -- common/autotest_common.sh@10 -- # set +x 00:07:23.254 ************************************ 00:07:23.254 END TEST version 00:07:23.254 ************************************ 00:07:23.513 05:27:16 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:23.513 05:27:16 -- spdk/autotest.sh@202 -- # uname -s 00:07:23.513 05:27:16 -- spdk/autotest.sh@202 -- # [[ Linux == 
Linux ]] 00:07:23.513 05:27:16 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:23.513 05:27:16 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:23.513 05:27:16 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:23.513 05:27:16 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:23.513 05:27:16 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:23.513 05:27:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:23.513 05:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:23.513 05:27:16 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:23.513 05:27:16 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:23.513 05:27:16 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:23.513 05:27:16 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:23.513 05:27:16 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:23.513 05:27:16 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:23.513 05:27:16 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:23.513 05:27:16 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:23.513 05:27:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.513 05:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:23.513 ************************************ 00:07:23.513 START TEST nvmf_tcp 00:07:23.513 ************************************ 00:07:23.513 05:27:17 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:23.513 * Looking for test storage... 00:07:23.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:23.513 05:27:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:23.513 05:27:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:23.513 05:27:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:23.513 05:27:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:23.513 05:27:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.513 05:27:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.513 ************************************ 00:07:23.513 START TEST nvmf_target_core 00:07:23.513 ************************************ 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:23.513 * Looking for test storage... 00:07:23.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.513 05:27:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.514 05:27:17 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:23.514 ************************************ 00:07:23.514 START TEST nvmf_abort 00:07:23.514 ************************************ 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:23.514 * Looking for test storage... 
00:07:23.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.514 05:27:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.514 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.772 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:23.772 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:23.772 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.772 05:27:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:25.673 05:27:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:25.673 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:25.673 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:25.673 05:27:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:25.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.673 05:27:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:25.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.673 05:27:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.673 05:27:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:25.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:07:25.673 00:07:25.673 --- 10.0.0.2 ping statistics --- 00:07:25.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.673 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:07:25.673 00:07:25.673 --- 10.0.0.1 ping statistics --- 00:07:25.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.673 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.673 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:25.674 05:27:19 
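The trace above shows the harness building its test topology: it creates a network namespace, moves the target-side NIC into it, assigns 10.0.0.1/10.0.0.2, opens TCP port 4420, and pings both directions. A minimal dry-run sketch of that sequence (the `run` wrapper and function name are assumptions for inspection without root; interface names, IPs, and the port come from the log):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the netns topology the harness builds above.
# Swap run() for eval "$*" (as root) to actually apply the commands.
set -euo pipefail

run() { echo "+ $*"; }

setup_nvmf_tcp_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"        # target NIC moves into the namespace
    run ip addr add "$ini_ip/24" dev "$ini_if"   # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_nvmf_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1 10.0.0.2 10.0.0.1
```

Isolating the target in its own namespace is what lets a single host act as both NVMe/TCP target and initiator over real NICs.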
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1508164 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1508164 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1508164 ']' 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.674 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.674 [2024-07-25 05:27:19.243509] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:07:25.674 [2024-07-25 05:27:19.243621] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.674 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.674 [2024-07-25 05:27:19.314317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.958 [2024-07-25 05:27:19.408311] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.958 [2024-07-25 05:27:19.408371] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.958 [2024-07-25 05:27:19.408393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.958 [2024-07-25 05:27:19.408407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.958 [2024-07-25 05:27:19.408419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:25.958 [2024-07-25 05:27:19.408503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.958 [2024-07-25 05:27:19.408583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.958 [2024-07-25 05:27:19.408585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.958 [2024-07-25 05:27:19.553705] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.958 Malloc0 00:07:25.958 05:27:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.958 Delay0 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.958 [2024-07-25 05:27:19.626473] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.958 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:26.216 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.216 [2024-07-25 05:27:19.691524] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:28.743 Initializing NVMe Controllers 00:07:28.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:28.743 controller IO queue size 128 less than required 00:07:28.743 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:28.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:28.743 Initialization complete. Launching workers. 
00:07:28.743 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33946 00:07:28.743 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34007, failed to submit 62 00:07:28.743 success 33950, unsuccess 57, failed 0 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.743 rmmod nvme_tcp 00:07:28.743 rmmod nvme_fabrics 00:07:28.743 rmmod nvme_keyring 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:28.743 05:27:21 
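The abort example ends with a one-line summary ("success 33950, unsuccess 57, failed 0"). Post-processing scripts can pull those counters out mechanically; a hedged awk sketch (the helper name is an assumption, the line format is taken from the log):

```shell
# Hypothetical helper for extracting counters from the abort summary line above.
parse_abort_summary() {
    # Emits "ok=<n> unsuccess=<n> failed=<n>"; gsub strips the commas so the
    # numbers land in fields 2, 4 and 6 after awk re-splits the record.
    awk '{gsub(/,/,""); printf "ok=%s unsuccess=%s failed=%s\n", $2, $4, $6}' <<<"$1"
}

parse_abort_summary 'success 33950, unsuccess 57, failed 0'
```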
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1508164 ']' 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1508164 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1508164 ']' 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1508164 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1508164 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1508164' 00:07:28.743 killing process with pid 1508164 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1508164 00:07:28.743 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1508164 00:07:28.743 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.743 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.743 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.743 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.743 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.743 05:27:22 
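The teardown above guards its `kill` with a liveness check (`kill -0`) and a `ps -o comm=` lookup so it never signals `sudo` itself. A minimal sketch of that pattern (the function name and messages are assumptions, not SPDK's actual common.sh helpers):

```shell
# Sketch of the guarded kill-and-wait teardown pattern seen in the log above.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || { echo "pid $pid not running"; return 0; }
    name=$(ps --no-headers -o comm= "$pid")
    [[ "$name" == sudo ]] && { echo "refusing to kill sudo"; return 1; }
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap it so sockets/SHM are released
}

sleep 30 & bg=$!
killprocess_sketch "$bg"
```

Reaping the target process before the next test starts is what keeps the listen port and shared-memory prefix free for the following run.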
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.743 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.743 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:30.639 00:07:30.639 real 0m7.071s 00:07:30.639 user 0m10.587s 00:07:30.639 sys 0m2.332s 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.639 ************************************ 00:07:30.639 END TEST nvmf_abort 00:07:30.639 ************************************ 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.639 ************************************ 00:07:30.639 START TEST nvmf_ns_hotplug_stress 00:07:30.639 ************************************ 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:30.639 * Looking for test storage... 
00:07:30.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.639 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:30.897 05:27:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.897 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.796 05:27:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:32.796 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:32.796 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.796 05:27:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:32.796 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:32.796 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:07:32.796 00:07:32.796 --- 10.0.0.2 ping statistics --- 00:07:32.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.796 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:32.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:07:32.796 00:07:32.796 --- 10.0.0.1 ping statistics --- 00:07:32.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.796 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1510491 00:07:32.796 05:27:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1510491 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1510491 ']' 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.796 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:33.053 [2024-07-25 05:27:26.501744] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:07:33.053 [2024-07-25 05:27:26.501826] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.053 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.053 [2024-07-25 05:27:26.574106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.053 [2024-07-25 05:27:26.669381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:33.053 [2024-07-25 05:27:26.669433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.053 [2024-07-25 05:27:26.669448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.053 [2024-07-25 05:27:26.669460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.053 [2024-07-25 05:27:26.669471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.053 [2024-07-25 05:27:26.669566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.053 [2024-07-25 05:27:26.669632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.053 [2024-07-25 05:27:26.669635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.311 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.311 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:33.311 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:33.311 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.311 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:33.311 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.311 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:33.311 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:07:33.311 [2024-07-25 05:27:27.011609] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.568 05:27:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:33.826 05:27:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.084 [2024-07-25 05:27:27.531927] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.084 05:27:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.341 05:27:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:34.341 Malloc0 00:07:34.599 05:27:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.599 Delay0 00:07:34.599 05:27:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.856 05:27:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:35.114 NULL1 00:07:35.114 05:27:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:35.371 05:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1510801 00:07:35.371 05:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:35.371 05:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:35.371 05:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.371 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.742 Read completed with error (sct=0, sc=11) 00:07:36.742 05:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.742 05:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:36.742 05:27:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:37.000 true 00:07:37.000 05:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:37.000 05:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.930 05:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.188 05:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:38.188 05:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:38.445 true 00:07:38.445 05:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:38.445 05:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.703 05:27:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.961 05:27:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:38.961 05:27:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:38.961 true 00:07:38.961 05:27:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:38.961 05:27:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.893 05:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.151 05:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:40.151 05:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:40.420 true 00:07:40.420 05:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:40.420 05:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.677 05:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.935 05:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:40.935 05:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:41.192 true 00:07:41.192 05:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:41.192 05:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.123 05:27:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.381 05:27:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:42.381 05:27:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:42.381 true 00:07:42.381 05:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:42.381 05:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.639 05:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.896 05:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:42.896 05:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:43.153 true 00:07:43.153 05:27:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:43.153 05:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.086 05:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.343 05:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:44.343 05:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:44.601 true 00:07:44.601 05:27:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:44.601 05:27:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.859 05:27:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.117 05:27:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:45.117 05:27:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:45.374 true 00:07:45.374 05:27:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:45.374 05:27:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.307 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.565 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:46.565 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:46.822 true 00:07:46.822 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:46.822 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.080 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.337 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:47.337 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:47.595 true 00:07:47.595 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:47.595 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.527 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.785 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:48.785 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:49.042 true 00:07:49.042 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:49.042 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.299 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.556 05:27:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:49.556 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:49.813 true 00:07:49.813 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:49.813 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.745 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.002 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:51.002 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:51.259 true 00:07:51.259 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:51.259 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.517 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.774 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:51.774 05:27:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:52.031 true 00:07:52.031 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:52.031 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.998 05:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.998 05:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:52.998 05:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:53.255 true 00:07:53.255 05:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:53.255 05:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.513 05:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.770 05:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:53.770 05:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:54.028 true 00:07:54.028 05:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:54.028 05:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.959 05:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.959 05:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:54.959 05:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:55.216 true 00:07:55.216 05:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:55.216 05:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.472 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.729 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:55.729 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:55.986 true 00:07:55.986 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:55.986 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.917 05:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.173 05:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:57.173 05:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:57.429 true 00:07:57.429 05:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:57.429 05:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.685 05:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.942 05:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:57.942 05:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:58.198 true 00:07:58.198 05:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:58.198 05:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.129 05:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.386 05:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:59.386 05:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:59.643 true 00:07:59.643 05:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:07:59.643 05:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.899 05:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.156 05:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:00.156 05:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:00.412 true 00:08:00.412 05:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:08:00.412 05:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.343 05:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.601 05:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:01.601 05:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:01.601 true 00:08:01.601 05:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:08:01.601 05:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.858 05:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.116 05:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:02.116 05:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 
00:08:02.374 true 00:08:02.374 05:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:08:02.374 05:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.307 05:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.565 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:03.565 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:03.823 true 00:08:03.823 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:08:03.823 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.336 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:04.337 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:04.337 true 00:08:04.594 05:27:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:08:04.594 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.851 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.851 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:04.851 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:05.148 true 00:08:05.148 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:08:05.148 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.520 Initializing NVMe Controllers 00:08:06.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:06.520 Controller IO queue size 128, less than required. 00:08:06.520 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:06.520 Controller IO queue size 128, less than required. 00:08:06.520 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:06.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:06.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:06.520 Initialization complete. Launching workers. 00:08:06.520 ======================================================== 00:08:06.520 Latency(us) 00:08:06.520 Device Information : IOPS MiB/s Average min max 00:08:06.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 733.82 0.36 96464.39 2890.77 1069734.47 00:08:06.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11547.69 5.64 11084.90 1833.23 364283.14 00:08:06.520 ======================================================== 00:08:06.520 Total : 12281.51 6.00 16186.34 1833.23 1069734.47 00:08:06.520 00:08:06.520 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.520 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:06.520 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:06.778 true 00:08:06.778 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1510801 00:08:06.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1510801) - No such process 00:08:06.778 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1510801 00:08:06.778 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:07.035 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.293 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:07.293 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:07.293 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:07.293 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:07.293 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:07.550 null0 00:08:07.550 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:07.550 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:07.550 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:07.807 null1 00:08:07.807 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:07.807 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:07.807 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:08.064 null2 00:08:08.064 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.064 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.064 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:08.321 null3 00:08:08.321 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.321 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.321 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:08.579 null4 00:08:08.579 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.579 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.579 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:08.579 null5 00:08:08.836 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.836 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.836 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:08.836 null6 00:08:08.836 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.836 05:28:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.836 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:09.095 null7 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.095 05:28:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1514946 1514949 1514952 1514955 1514959 1514962 1514965 1514969 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.095 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.353 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.353 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.611 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.611 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.611 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.611 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.611 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.612 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.880 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:10.144 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.144 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.144 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.144 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:10.144 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:08:10.144 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.144 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.144 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.401 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:10.658 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.659 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.659 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:10.659 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.659 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.659 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.659 05:28:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:10.659 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.916 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.174 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.174 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.174 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.174 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.174 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.174 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.174 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.174 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.431 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.431 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.432 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.432 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.432 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.432 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.432 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.432 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.432 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.689 05:28:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.689 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.689 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.690 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.690 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.690 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.690 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.690 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.947 05:28:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.947 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.204 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.204 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.204 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.204 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.204 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.204 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.204 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.204 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.462 05:28:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.462 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.720 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.720 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.720 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.720 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.720 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.720 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.720 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.720 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.978 
05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.978 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.236 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.236 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.236 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.236 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.236 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.236 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.236 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.236 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.494 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.494 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.494 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.494 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.494 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.494 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.494 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.494 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.494 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.494 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.495 05:28:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.495 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.752 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.752 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.752 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.752 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.752 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.009 05:28:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.009 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.009 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.267 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.525 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.525 05:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.525 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.525 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.525 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.525 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.525 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.526 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.783 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.784 rmmod nvme_tcp 00:08:14.784 rmmod nvme_fabrics 00:08:14.784 rmmod nvme_keyring 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1510491 ']' 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1510491 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # 
'[' -z 1510491 ']' 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1510491 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1510491 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1510491' 00:08:14.784 killing process with pid 1510491 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1510491 00:08:14.784 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1510491 00:08:15.042 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.042 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.042 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.042 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.042 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.042 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.042 
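The teardown traced above (autotest_common.sh@950-@974) checks the pid, refuses to kill a `sudo` process, then kills and reaps it. A sketch under those assumptions; `killprocess_sketch` is a hypothetical name for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper seen in the trace: validate the pid,
# never kill our own sudo wrapper, then kill and wait for the process.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1                      # @950: pid must be set
    kill -0 "$pid" 2>/dev/null || return 1         # @954: is it alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # @956
    [ "$process_name" = sudo ] && return 1         # @960: refuse to kill sudo
    echo "killing process with pid $pid"           # @968
    kill "$pid"                                    # @969
    wait "$pid" 2>/dev/null                        # @974
    return 0
}
```

In the trace the target is `reactor_1`, the SPDK nvmf target's reactor thread, so the sudo guard passes and pid 1510491 is killed and awaited.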
05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.042 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.571 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:17.571 00:08:17.571 real 0m46.395s 00:08:17.571 user 3m30.933s 00:08:17.571 sys 0m16.377s 00:08:17.571 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.571 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.571 ************************************ 00:08:17.571 END TEST nvmf_ns_hotplug_stress 00:08:17.571 ************************************ 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.572 ************************************ 00:08:17.572 START TEST nvmf_delete_subsystem 00:08:17.572 ************************************ 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:17.572 * Looking for test storage... 
00:08:17.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.572 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.474 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.474 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.475 05:28:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.475 05:28:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:19.475 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:19.475 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.475 05:28:12 
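The detection loop above matches each discovered PCI device ID against the e810/x722/mlx tables built a few steps earlier. As a hedged sketch (the helper name is illustrative, but the hex IDs are exactly the ones registered in this trace), the classification logic amounts to:

```shell
# Illustrative helper condensing the device-ID tables from the trace above;
# 0x8086 (Intel) and 0x15b3 (Mellanox) IDs as registered by nvmf/common.sh.
classify_nvmf_nic() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;      # Intel E810 (ice driver, as seen here)
        0x37d2)        echo x722 ;;      # Intel X722
        0xa2dc|0x1021|0xa2d6|0x101d|0x1017|0x1019|0x1015|0x1013)
                       echo mlx ;;       # Mellanox ConnectX family
        *)             echo unknown ;;
    esac
}

classify_nvmf_nic 0x159b   # the two ports found in this run classify as: e810
```

In this run both 0000:0a:00.0 and 0000:0a:00.1 report 0x8086:0x159b, so `pci_devs` is populated from the e810 table.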
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:19.475 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:19.475 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.475 
05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:08:19.475 00:08:19.475 --- 10.0.0.2 ping statistics --- 00:08:19.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.475 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:08:19.475 00:08:19.475 --- 10.0.0.1 ping statistics --- 00:08:19.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.475 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:08:19.475 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1517745 00:08:19.476 05:28:12 
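The `nvmf_tcp_init` sequence traced above (namespace creation, moving the target port into it, addressing both ends, opening port 4420, and ping-verifying both directions) can be sketched as a dry-run script. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the values from this run; `run` defaults to `echo` so the commands are printed rather than executed, since the real thing needs root and the physical NICs.

```shell
# Dry-run sketch of the nvmf_tcp_init steps from the trace above.
# With run="echo" each command is printed; replace it (e.g. run="sudo")
# on a machine that actually has the cvl_0_0/cvl_0_1 interfaces.
nvmf_tcp_init_sketch() {
    local run="${1:-echo}" ns=cvl_0_0_ns_spdk
    $run ip netns add "$ns"
    $run ip link set cvl_0_0 netns "$ns"                            # target port into namespace
    $run ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    $run ip link set cvl_0_1 up
    $run ip netns exec "$ns" ip link set cvl_0_0 up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                                         # verify both directions
    $run ip netns exec "$ns" ping -c 1 10.0.0.1
}

nvmf_tcp_init_sketch echo
```

The two successful pings in the log confirm the topology before `nvmf_tgt` is launched inside the namespace.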
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1517745 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1517745 ']' 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.476 05:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.476 [2024-07-25 05:28:13.012660] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:08:19.476 [2024-07-25 05:28:13.012743] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.476 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.476 [2024-07-25 05:28:13.075975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:19.476 [2024-07-25 05:28:13.163546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:19.476 [2024-07-25 05:28:13.163606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.476 [2024-07-25 05:28:13.163619] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.476 [2024-07-25 05:28:13.163631] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.476 [2024-07-25 05:28:13.163640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.476 [2024-07-25 05:28:13.163696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.476 [2024-07-25 05:28:13.163701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.734 [2024-07-25 05:28:13.309133] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.734 [2024-07-25 05:28:13.325389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.734 NULL1 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.734 05:28:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.734 Delay0 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1517773 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:19.734 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:19.734 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.734 [2024-07-25 05:28:13.400035] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
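The `rpc_cmd` calls traced above set up the delete_subsystem scenario: a TCP transport, a subsystem with a listener, and a deliberately slow `Delay0` namespace so that `spdk_nvme_perf` has I/O in flight when the subsystem is torn down. A hedged dry-run condensation (function name is illustrative; NQN, sizes, and delay values are the ones from this run; with `rpc="echo rpc.py"` the calls are only printed):

```shell
# Dry-run sketch of the delete_subsystem test setup traced above.
# Point rpc at SPDK's scripts/rpc.py (with nvmf_tgt running) to execute.
delete_subsystem_setup_sketch() {
    local rpc="${1:-echo rpc.py}" nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                  # backing null bdev
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0              # slow namespace keeps I/O queued
    # spdk_nvme_perf runs against 10.0.0.2:4420 in the background, then:
    $rpc nvmf_delete_subsystem "$nqn"
}

delete_subsystem_setup_sketch "echo rpc.py"
```

Deleting the subsystem while perf is connected is what produces the burst of aborted completions that follows.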
00:08:22.258 05:28:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.258 05:28:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.258 05:28:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 starting I/O failed: -6 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 starting I/O failed: -6 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 starting I/O failed: -6 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.258 starting I/O failed: -6 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 starting I/O failed: -6 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.258 starting I/O failed: -6 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 starting I/O failed: -6 
00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 starting I/O failed: -6 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.258 starting I/O failed: -6 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 starting I/O failed: -6 00:08:22.258 Read completed with error (sct=0, sc=8) 00:08:22.258 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 [2024-07-25 05:28:15.492316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fb970 is same with the state(5) to be set 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write 
completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 
00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 
Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with 
error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed 
with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 Read completed with error (sct=0, sc=8) 00:08:22.259 starting I/O failed: -6 00:08:22.259 Write completed with error (sct=0, sc=8) 00:08:22.260 Write completed with error (sct=0, sc=8) 00:08:22.260 starting I/O failed: -6 00:08:22.260 Write completed with error (sct=0, sc=8) 00:08:22.260 [2024-07-25 05:28:15.493900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcef4000c00 is same with the state(5) to be set 00:08:22.826 [2024-07-25 05:28:16.460515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1209a30 is same with the state(5) to be set 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, 
sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 [2024-07-25 05:28:16.494737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fc4b0 is same with the state(5) to be set 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 
Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 [2024-07-25 05:28:16.495385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcef400d660 is same with the state(5) to be set 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 [2024-07-25 05:28:16.496200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fbe50 is same with the state(5) to be set 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read 
completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Write completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, sc=8) 00:08:22.826 Read completed with error (sct=0, 
sc=8) 00:08:22.826 [2024-07-25 05:28:16.496728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcef400d000 is same with the state(5) to be set 00:08:22.826 Initializing NVMe Controllers 00:08:22.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:22.826 Controller IO queue size 128, less than required. 00:08:22.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:22.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:22.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:22.826 Initialization complete. Launching workers. 00:08:22.826 ======================================================== 00:08:22.826 Latency(us) 00:08:22.826 Device Information : IOPS MiB/s Average min max 00:08:22.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.33 0.08 901604.88 539.98 1011308.80 00:08:22.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.79 0.08 965332.01 537.52 2002470.61 00:08:22.826 ======================================================== 00:08:22.826 Total : 340.11 0.17 933980.12 537.52 2002470.61 00:08:22.826 00:08:22.826 [2024-07-25 05:28:16.497872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1209a30 (9): Bad file descriptor 00:08:22.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:22.826 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.826 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:22.826 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1517773 00:08:22.827 05:28:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1517773 00:08:23.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1517773) - No such process 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1517773 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1517773 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1517773 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.392 
05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.392 [2024-07-25 05:28:17.020728] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1518180 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518180 00:08:23.392 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.392 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.392 [2024-07-25 05:28:17.075158] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:23.957 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.957 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518180 00:08:23.957 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.522 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.522 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518180 00:08:24.522 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.087 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.088 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518180 00:08:25.088 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.345 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.345 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518180 00:08:25.345 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.909 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.909 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518180 00:08:25.910 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.475 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.475 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518180 00:08:26.475 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.733 Initializing NVMe Controllers 00:08:26.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:26.733 Controller IO queue size 128, less than required. 00:08:26.733 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:26.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:26.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:26.733 Initialization complete. Launching workers. 
00:08:26.733 ======================================================== 00:08:26.733 Latency(us) 00:08:26.733 Device Information : IOPS MiB/s Average min max 00:08:26.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003470.41 1000190.40 1012883.31 00:08:26.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004993.78 1000189.10 1041845.42 00:08:26.733 ======================================================== 00:08:26.733 Total : 256.00 0.12 1004232.09 1000189.10 1041845.42 00:08:26.733 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518180 00:08:26.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1518180) - No such process 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1518180 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:08:26.991 rmmod nvme_tcp 00:08:26.991 rmmod nvme_fabrics 00:08:26.991 rmmod nvme_keyring 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1517745 ']' 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1517745 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1517745 ']' 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1517745 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1517745 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1517745' 00:08:26.991 killing process with pid 1517745 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1517745 00:08:26.991 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 
1517745 00:08:27.249 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.249 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.249 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.249 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.249 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.249 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.249 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.249 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.778 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.778 00:08:29.778 real 0m12.184s 00:08:29.778 user 0m27.629s 00:08:29.778 sys 0m3.001s 00:08:29.778 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.778 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.778 ************************************ 00:08:29.778 END TEST nvmf_delete_subsystem 00:08:29.778 ************************************ 00:08:29.778 05:28:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:29.778 05:28:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.778 05:28:22 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.778 05:28:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.778 ************************************ 00:08:29.778 START TEST nvmf_host_management 00:08:29.778 ************************************ 00:08:29.778 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:29.778 * Looking for test storage... 00:08:29.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.778 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.779 05:28:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.779 05:28:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.779 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.680 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:31.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:31.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:31.681 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:08:31.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.681 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:08:31.681 00:08:31.681 --- 10.0.0.2 ping statistics --- 00:08:31.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.681 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:08:31.681 00:08:31.681 --- 10.0.0.1 ping statistics --- 00:08:31.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.681 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.681 05:28:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1520521 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1520521 00:08:31.681 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1520521 ']' 00:08:31.682 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.682 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.682 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.682 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.682 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.682 [2024-07-25 05:28:25.180476] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:08:31.682 [2024-07-25 05:28:25.180568] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.682 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.682 [2024-07-25 05:28:25.249507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.682 [2024-07-25 05:28:25.344070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.682 [2024-07-25 05:28:25.344133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.682 [2024-07-25 05:28:25.344158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.682 [2024-07-25 05:28:25.344172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.682 [2024-07-25 05:28:25.344183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:31.682 [2024-07-25 05:28:25.344275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.682 [2024-07-25 05:28:25.344399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.682 [2024-07-25 05:28:25.344467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:31.682 [2024-07-25 05:28:25.344470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.940 [2024-07-25 05:28:25.505824] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:31.940 05:28:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.940 Malloc0 00:08:31.940 [2024-07-25 05:28:25.571023] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1520687 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1520687 /var/tmp/bdevperf.sock 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1520687 ']' 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:31.940 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.941 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:31.941 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.941 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:31.941 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.941 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:31.941 { 00:08:31.941 "params": { 00:08:31.941 "name": "Nvme$subsystem", 00:08:31.941 "trtype": "$TEST_TRANSPORT", 00:08:31.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.941 "adrfam": "ipv4", 00:08:31.941 "trsvcid": "$NVMF_PORT", 00:08:31.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.941 "hdgst": ${hdgst:-false}, 
00:08:31.941 "ddgst": ${ddgst:-false} 00:08:31.941 }, 00:08:31.941 "method": "bdev_nvme_attach_controller" 00:08:31.941 } 00:08:31.941 EOF 00:08:31.941 )") 00:08:31.941 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:31.941 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:31.941 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:31.941 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:31.941 "params": { 00:08:31.941 "name": "Nvme0", 00:08:31.941 "trtype": "tcp", 00:08:31.941 "traddr": "10.0.0.2", 00:08:31.941 "adrfam": "ipv4", 00:08:31.941 "trsvcid": "4420", 00:08:31.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:31.941 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:31.941 "hdgst": false, 00:08:31.941 "ddgst": false 00:08:31.941 }, 00:08:31.941 "method": "bdev_nvme_attach_controller" 00:08:31.941 }' 00:08:32.199 [2024-07-25 05:28:25.648975] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:08:32.199 [2024-07-25 05:28:25.649057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520687 ] 00:08:32.199 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.199 [2024-07-25 05:28:25.710734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.199 [2024-07-25 05:28:25.797663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.456 Running I/O for 10 seconds... 
00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:32.456 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:32.457 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:32.457 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:32.457 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:32.457 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:32.457 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.457 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.457 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.457 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:32.457 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:32.457 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.749 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.749 [2024-07-25 05:28:26.398157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 
05:28:26.398363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7d20 is same with the state(5) to be set 00:08:32.749 [2024-07-25 05:28:26.398663] nvme_qpair.c: 
00:08:32.749-00:08:32.751 [log condensed: nvme_qpair.c emitted the same *NOTICE* pair for each outstanding I/O — 243:nvme_io_qpair_print_command (WRITE sqid:1 cid:0 through cid:61 nsid:1 lba:73728 through 81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) followed by 474:spdk_nvme_print_completion (ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) — as the submission queue was deleted during the controller reset] 00:08:32.751 [2024-07-25 05:28:26.400716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.751 [2024-07-25 05:28:26.400730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.751 [2024-07-25 05:28:26.400745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.751 [2024-07-25 05:28:26.400760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.751 [2024-07-25 05:28:26.400845] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x193d420 was disconnected and freed. reset controller. 00:08:32.751 [2024-07-25 05:28:26.402016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:32.751 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.751 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:32.751 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.751 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.751 task offset: 73728 on job bdev=Nvme0n1 fails 00:08:32.751 00:08:32.751 Latency(us) 00:08:32.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.751 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:32.751 Job: Nvme0n1 ended in about 0.38 seconds with error 00:08:32.751 Verification LBA range: start 0x0 length 0x400 00:08:32.751 Nvme0n1 : 0.38 1501.26 93.83 166.81 0.00 37236.00 3665.16 35146.71 00:08:32.751 
=================================================================================================================== 00:08:32.751 Total : 1501.26 93.83 166.81 0.00 37236.00 3665.16 35146.71 00:08:32.751 [2024-07-25 05:28:26.404052] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.751 [2024-07-25 05:28:26.404087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1943000 (9): Bad file descriptor 00:08:32.751 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.751 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:32.751 [2024-07-25 05:28:26.416563] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:34.127 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1520687 00:08:34.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1520687) - No such process 00:08:34.127 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:34.127 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:34.127 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:34.127 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:34.127 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:34.127 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem 
config 00:08:34.128 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:34.128 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:34.128 { 00:08:34.128 "params": { 00:08:34.128 "name": "Nvme$subsystem", 00:08:34.128 "trtype": "$TEST_TRANSPORT", 00:08:34.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:34.128 "adrfam": "ipv4", 00:08:34.128 "trsvcid": "$NVMF_PORT", 00:08:34.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:34.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:34.128 "hdgst": ${hdgst:-false}, 00:08:34.128 "ddgst": ${ddgst:-false} 00:08:34.128 }, 00:08:34.128 "method": "bdev_nvme_attach_controller" 00:08:34.128 } 00:08:34.128 EOF 00:08:34.128 )") 00:08:34.128 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:34.128 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:34.128 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:34.128 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:34.128 "params": { 00:08:34.128 "name": "Nvme0", 00:08:34.128 "trtype": "tcp", 00:08:34.128 "traddr": "10.0.0.2", 00:08:34.128 "adrfam": "ipv4", 00:08:34.128 "trsvcid": "4420", 00:08:34.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:34.128 "hdgst": false, 00:08:34.128 "ddgst": false 00:08:34.128 }, 00:08:34.128 "method": "bdev_nvme_attach_controller" 00:08:34.128 }' 00:08:34.128 [2024-07-25 05:28:27.458872] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:08:34.128 [2024-07-25 05:28:27.458958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520853 ] 00:08:34.128 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.128 [2024-07-25 05:28:27.521450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.128 [2024-07-25 05:28:27.609664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.385 Running I/O for 1 seconds... 00:08:35.317 00:08:35.317 Latency(us) 00:08:35.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.317 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:35.317 Verification LBA range: start 0x0 length 0x400 00:08:35.317 Nvme0n1 : 1.02 1572.24 98.27 0.00 0.00 40061.78 8398.32 34175.81 00:08:35.317 =================================================================================================================== 00:08:35.317 Total : 1572.24 98.27 0.00 0.00 40061.78 8398.32 34175.81 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.574 rmmod nvme_tcp 00:08:35.574 rmmod nvme_fabrics 00:08:35.574 rmmod nvme_keyring 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1520521 ']' 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1520521 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1520521 ']' 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1520521 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520521 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520521' 00:08:35.574 killing process with pid 1520521 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1520521 00:08:35.574 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1520521 00:08:35.832 [2024-07-25 05:28:29.380032] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:35.832 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.832 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.832 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.832 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.832 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.832 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.832 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.832 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.360 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:38.360 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:38.360 00:08:38.360 real 
0m8.498s 00:08:38.360 user 0m18.931s 00:08:38.360 sys 0m2.672s 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.361 ************************************ 00:08:38.361 END TEST nvmf_host_management 00:08:38.361 ************************************ 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.361 ************************************ 00:08:38.361 START TEST nvmf_lvol 00:08:38.361 ************************************ 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:38.361 * Looking for test storage... 
00:08:38.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.361 
05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:38.361 
05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.361 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:40.261 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.261 
05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:40.261 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:40.261 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.261 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:40.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.262 05:28:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:08:40.262 00:08:40.262 --- 10.0.0.2 ping statistics --- 00:08:40.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.262 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:08:40.262 00:08:40.262 --- 10.0.0.1 ping statistics --- 00:08:40.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.262 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:40.262 05:28:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1523047 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1523047 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1523047 ']' 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.262 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:40.262 [2024-07-25 05:28:33.811603] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:08:40.262 [2024-07-25 05:28:33.811702] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.262 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.262 [2024-07-25 05:28:33.880130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.519 [2024-07-25 05:28:33.976255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.519 [2024-07-25 05:28:33.976302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.519 [2024-07-25 05:28:33.976326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.519 [2024-07-25 05:28:33.976338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.519 [2024-07-25 05:28:33.976348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:40.519 [2024-07-25 05:28:33.979266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.519 [2024-07-25 05:28:33.979315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.519 [2024-07-25 05:28:33.979318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.519 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.519 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:40.519 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.519 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.519 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:40.519 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.519 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:40.776 [2024-07-25 05:28:34.352350] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.776 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.033 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:41.033 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.291 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:41.291 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:41.549 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:41.806 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3b86323e-72c3-4f21-823b-9f0b40733990 00:08:41.806 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b86323e-72c3-4f21-823b-9f0b40733990 lvol 20 00:08:42.064 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a41d3c71-c375-49e0-9eb5-3b000c39b880 00:08:42.064 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:42.321 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a41d3c71-c375-49e0-9eb5-3b000c39b880 00:08:42.578 05:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:42.835 [2024-07-25 05:28:36.396796] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.835 05:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.092 05:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1523477 00:08:43.092 05:28:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:43.092 05:28:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:43.092 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.025 05:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a41d3c71-c375-49e0-9eb5-3b000c39b880 MY_SNAPSHOT 00:08:44.282 05:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0cde6992-2fa8-4707-a45a-0ed260718a98 00:08:44.282 05:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a41d3c71-c375-49e0-9eb5-3b000c39b880 30 00:08:44.847 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0cde6992-2fa8-4707-a45a-0ed260718a98 MY_CLONE 00:08:44.847 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c3c48f72-b55e-4c10-a5e0-63559d26cfc3 00:08:44.847 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c3c48f72-b55e-4c10-a5e0-63559d26cfc3 00:08:45.781 05:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1523477 00:08:53.897 Initializing NVMe Controllers 00:08:53.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:53.897 Controller IO queue size 128, less than required. 00:08:53.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:53.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:53.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:53.897 Initialization complete. Launching workers. 00:08:53.897 ======================================================== 00:08:53.897 Latency(us) 00:08:53.897 Device Information : IOPS MiB/s Average min max 00:08:53.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9374.88 36.62 13667.86 1610.58 97337.40 00:08:53.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10892.00 42.55 11761.61 2007.89 72796.66 00:08:53.897 ======================================================== 00:08:53.897 Total : 20266.88 79.17 12643.38 1610.58 97337.40 00:08:53.897 00:08:53.898 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:53.898 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a41d3c71-c375-49e0-9eb5-3b000c39b880 00:08:53.898 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b86323e-72c3-4f21-823b-9f0b40733990 00:08:54.156 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:54.157 rmmod nvme_tcp 00:08:54.157 rmmod nvme_fabrics 00:08:54.157 rmmod nvme_keyring 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1523047 ']' 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1523047 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1523047 ']' 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1523047 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.157 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1523047 00:08:54.415 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:54.415 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:54.415 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1523047' 00:08:54.415 killing process with pid 1523047 00:08:54.415 05:28:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1523047
00:08:54.415 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1523047
00:08:54.673 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:54.673 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:54.673 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:54.673 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:54.673 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:54.673 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:54.673 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:54.673 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:56.573 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:56.573
00:08:56.573 real 0m18.677s
00:08:56.573 user 1m2.297s
00:08:56.573 sys 0m6.088s
00:08:56.573 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:56.573 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:56.573 ************************************
00:08:56.573 END TEST nvmf_lvol
00:08:56.573 ************************************
00:08:56.573 05:28:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:08:56.573 05:28:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:56.573 05:28:50
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:56.573 05:28:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:56.573 ************************************
00:08:56.573 START TEST nvmf_lvs_grow
00:08:56.573 ************************************
05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:08:56.851 * Looking for test storage...
00:08:56.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow --
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:56.852 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:58.753 05:28:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:58.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.753 
05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:58.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.753 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.754 05:28:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:58.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:58.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.754 05:28:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:58.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:58.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms
00:08:58.754
00:08:58.754 --- 10.0.0.2 ping statistics ---
00:08:58.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:58.754 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:58.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:58.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:08:58.754
00:08:58.754 --- 10.0.0.1 ping statistics ---
00:08:58.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:58.754 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1526739
00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1526739 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1526739 ']' 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.754 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.754 [2024-07-25 05:28:52.413184] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:08:58.754 [2024-07-25 05:28:52.413288] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.754 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.012 [2024-07-25 05:28:52.492452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.012 [2024-07-25 05:28:52.591051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.012 [2024-07-25 05:28:52.591119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:59.012 [2024-07-25 05:28:52.591144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.012 [2024-07-25 05:28:52.591167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.012 [2024-07-25 05:28:52.591186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.012 [2024-07-25 05:28:52.591234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.271 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.271 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:59.271 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:59.271 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.271 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.271 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.271 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:59.529 [2024-07-25 05:28:52.991604] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.529 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:59.529 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.529 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.529 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.529 ************************************ 00:08:59.529 START TEST lvs_grow_clean 00:08:59.530 ************************************ 00:08:59.530 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:59.530 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:59.530 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:59.530 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:59.530 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:59.530 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:59.530 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:59.530 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.530 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.530 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.787 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:59.787 05:28:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:00.045 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:00.045 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:00.045 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:00.303 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:00.303 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:00.303 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 71545339-6ab4-41d6-8aec-163c5a0e21dc lvol 150 00:09:00.561 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9b3b8774-2b70-4124-a13e-505627ad8c52 00:09:00.561 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:00.561 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:00.819 [2024-07-25 05:28:54.306506] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:00.819 [2024-07-25 05:28:54.306604] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:00.819 true 00:09:00.819 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:00.819 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:01.077 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:01.077 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:01.334 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b3b8774-2b70-4124-a13e-505627ad8c52 00:09:01.592 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:01.851 [2024-07-25 05:28:55.301578] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.851 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:02.110 05:28:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1527064 00:09:02.110 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:02.110 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:02.110 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1527064 /var/tmp/bdevperf.sock 00:09:02.110 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1527064 ']' 00:09:02.110 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.110 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.110 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.110 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.110 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:02.110 [2024-07-25 05:28:55.601884] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:09:02.110 [2024-07-25 05:28:55.601962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527064 ] 00:09:02.110 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.110 [2024-07-25 05:28:55.665865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.110 [2024-07-25 05:28:55.759557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.368 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.368 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:02.368 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:02.626 Nvme0n1 00:09:02.626 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:02.884 [ 00:09:02.884 { 00:09:02.884 "name": "Nvme0n1", 00:09:02.884 "aliases": [ 00:09:02.884 "9b3b8774-2b70-4124-a13e-505627ad8c52" 00:09:02.884 ], 00:09:02.884 "product_name": "NVMe disk", 00:09:02.884 "block_size": 4096, 00:09:02.884 "num_blocks": 38912, 00:09:02.884 "uuid": "9b3b8774-2b70-4124-a13e-505627ad8c52", 00:09:02.884 "assigned_rate_limits": { 00:09:02.884 "rw_ios_per_sec": 0, 00:09:02.884 "rw_mbytes_per_sec": 0, 00:09:02.884 "r_mbytes_per_sec": 0, 00:09:02.884 "w_mbytes_per_sec": 0 00:09:02.884 }, 00:09:02.884 "claimed": false, 00:09:02.884 "zoned": false, 00:09:02.884 
"supported_io_types": { 00:09:02.884 "read": true, 00:09:02.884 "write": true, 00:09:02.884 "unmap": true, 00:09:02.884 "flush": true, 00:09:02.884 "reset": true, 00:09:02.884 "nvme_admin": true, 00:09:02.884 "nvme_io": true, 00:09:02.884 "nvme_io_md": false, 00:09:02.884 "write_zeroes": true, 00:09:02.884 "zcopy": false, 00:09:02.884 "get_zone_info": false, 00:09:02.884 "zone_management": false, 00:09:02.884 "zone_append": false, 00:09:02.884 "compare": true, 00:09:02.884 "compare_and_write": true, 00:09:02.884 "abort": true, 00:09:02.884 "seek_hole": false, 00:09:02.884 "seek_data": false, 00:09:02.884 "copy": true, 00:09:02.884 "nvme_iov_md": false 00:09:02.884 }, 00:09:02.884 "memory_domains": [ 00:09:02.884 { 00:09:02.884 "dma_device_id": "system", 00:09:02.884 "dma_device_type": 1 00:09:02.884 } 00:09:02.884 ], 00:09:02.884 "driver_specific": { 00:09:02.884 "nvme": [ 00:09:02.884 { 00:09:02.884 "trid": { 00:09:02.884 "trtype": "TCP", 00:09:02.884 "adrfam": "IPv4", 00:09:02.884 "traddr": "10.0.0.2", 00:09:02.884 "trsvcid": "4420", 00:09:02.884 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:02.884 }, 00:09:02.884 "ctrlr_data": { 00:09:02.884 "cntlid": 1, 00:09:02.884 "vendor_id": "0x8086", 00:09:02.884 "model_number": "SPDK bdev Controller", 00:09:02.884 "serial_number": "SPDK0", 00:09:02.884 "firmware_revision": "24.09", 00:09:02.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:02.884 "oacs": { 00:09:02.884 "security": 0, 00:09:02.884 "format": 0, 00:09:02.884 "firmware": 0, 00:09:02.884 "ns_manage": 0 00:09:02.884 }, 00:09:02.884 "multi_ctrlr": true, 00:09:02.884 "ana_reporting": false 00:09:02.884 }, 00:09:02.884 "vs": { 00:09:02.884 "nvme_version": "1.3" 00:09:02.884 }, 00:09:02.884 "ns_data": { 00:09:02.884 "id": 1, 00:09:02.884 "can_share": true 00:09:02.884 } 00:09:02.884 } 00:09:02.884 ], 00:09:02.884 "mp_policy": "active_passive" 00:09:02.884 } 00:09:02.884 } 00:09:02.884 ] 00:09:02.884 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1527199 00:09:02.884 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:02.884 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:02.884 Running I/O for 10 seconds... 00:09:04.258 Latency(us) 00:09:04.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.258 Nvme0n1 : 1.00 14306.00 55.88 0.00 0.00 0.00 0.00 0.00 00:09:04.258 =================================================================================================================== 00:09:04.258 Total : 14306.00 55.88 0.00 0.00 0.00 0.00 0.00 00:09:04.258 00:09:04.824 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:05.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.083 Nvme0n1 : 2.00 14588.50 56.99 0.00 0.00 0.00 0.00 0.00 00:09:05.083 =================================================================================================================== 00:09:05.083 Total : 14588.50 56.99 0.00 0.00 0.00 0.00 0.00 00:09:05.083 00:09:05.083 true 00:09:05.083 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:05.083 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:05.346 05:28:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:05.346 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:05.346 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1527199 00:09:05.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.912 Nvme0n1 : 3.00 14641.00 57.19 0.00 0.00 0.00 0.00 0.00 00:09:05.912 =================================================================================================================== 00:09:05.912 Total : 14641.00 57.19 0.00 0.00 0.00 0.00 0.00 00:09:05.912 00:09:06.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.844 Nvme0n1 : 4.00 14697.50 57.41 0.00 0.00 0.00 0.00 0.00 00:09:06.844 =================================================================================================================== 00:09:06.844 Total : 14697.50 57.41 0.00 0.00 0.00 0.00 0.00 00:09:06.844 00:09:08.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.213 Nvme0n1 : 5.00 14807.60 57.84 0.00 0.00 0.00 0.00 0.00 00:09:08.213 =================================================================================================================== 00:09:08.213 Total : 14807.60 57.84 0.00 0.00 0.00 0.00 0.00 00:09:08.213 00:09:09.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.145 Nvme0n1 : 6.00 14843.00 57.98 0.00 0.00 0.00 0.00 0.00 00:09:09.145 =================================================================================================================== 00:09:09.145 Total : 14843.00 57.98 0.00 0.00 0.00 0.00 0.00 00:09:09.145 00:09:10.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.076 Nvme0n1 : 7.00 14876.29 58.11 0.00 0.00 0.00 0.00 0.00 00:09:10.076 
=================================================================================================================== 00:09:10.076 Total : 14876.29 58.11 0.00 0.00 0.00 0.00 0.00 00:09:10.076 00:09:11.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.005 Nvme0n1 : 8.00 14908.38 58.24 0.00 0.00 0.00 0.00 0.00 00:09:11.005 =================================================================================================================== 00:09:11.005 Total : 14908.38 58.24 0.00 0.00 0.00 0.00 0.00 00:09:11.005 00:09:11.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.936 Nvme0n1 : 9.00 14943.67 58.37 0.00 0.00 0.00 0.00 0.00 00:09:11.936 =================================================================================================================== 00:09:11.936 Total : 14943.67 58.37 0.00 0.00 0.00 0.00 0.00 00:09:11.936 00:09:12.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.869 Nvme0n1 : 10.00 14958.10 58.43 0.00 0.00 0.00 0.00 0.00 00:09:12.869 =================================================================================================================== 00:09:12.869 Total : 14958.10 58.43 0.00 0.00 0.00 0.00 0.00 00:09:12.869 00:09:12.869 00:09:12.869 Latency(us) 00:09:12.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.869 Nvme0n1 : 10.01 14961.01 58.44 0.00 0.00 8550.54 4708.88 19903.53 00:09:12.869 =================================================================================================================== 00:09:12.869 Total : 14961.01 58.44 0.00 0.00 8550.54 4708.88 19903.53 00:09:12.869 0 00:09:12.869 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1527064 00:09:12.869 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 1527064 ']' 00:09:12.869 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1527064 00:09:13.127 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:13.127 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.127 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527064 00:09:13.127 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:13.127 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:13.127 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527064' 00:09:13.127 killing process with pid 1527064 00:09:13.127 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1527064 00:09:13.127 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.127 00:09:13.127 Latency(us) 00:09:13.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.127 =================================================================================================================== 00:09:13.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.127 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1527064 00:09:13.385 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:13.385 05:29:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:13.950 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:13.950 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:13.950 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:13.950 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:13.950 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.208 [2024-07-25 05:29:07.829924] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:14.208 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:14.466 request: 00:09:14.466 { 00:09:14.466 "uuid": "71545339-6ab4-41d6-8aec-163c5a0e21dc", 00:09:14.466 "method": "bdev_lvol_get_lvstores", 00:09:14.466 "req_id": 1 00:09:14.466 } 00:09:14.466 Got JSON-RPC error response 00:09:14.466 response: 00:09:14.466 { 00:09:14.466 "code": -19, 00:09:14.466 "message": "No such device" 00:09:14.466 } 00:09:14.466 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:14.466 05:29:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:14.466 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:14.466 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:14.467 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.724 aio_bdev 00:09:14.724 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9b3b8774-2b70-4124-a13e-505627ad8c52 00:09:14.724 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=9b3b8774-2b70-4124-a13e-505627ad8c52 00:09:14.724 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.724 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:14.724 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.724 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.724 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:14.982 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9b3b8774-2b70-4124-a13e-505627ad8c52 -t 2000 00:09:15.241 [ 00:09:15.241 { 
00:09:15.241 "name": "9b3b8774-2b70-4124-a13e-505627ad8c52", 00:09:15.241 "aliases": [ 00:09:15.241 "lvs/lvol" 00:09:15.241 ], 00:09:15.241 "product_name": "Logical Volume", 00:09:15.241 "block_size": 4096, 00:09:15.241 "num_blocks": 38912, 00:09:15.241 "uuid": "9b3b8774-2b70-4124-a13e-505627ad8c52", 00:09:15.241 "assigned_rate_limits": { 00:09:15.241 "rw_ios_per_sec": 0, 00:09:15.241 "rw_mbytes_per_sec": 0, 00:09:15.241 "r_mbytes_per_sec": 0, 00:09:15.241 "w_mbytes_per_sec": 0 00:09:15.241 }, 00:09:15.241 "claimed": false, 00:09:15.241 "zoned": false, 00:09:15.241 "supported_io_types": { 00:09:15.241 "read": true, 00:09:15.241 "write": true, 00:09:15.241 "unmap": true, 00:09:15.241 "flush": false, 00:09:15.241 "reset": true, 00:09:15.241 "nvme_admin": false, 00:09:15.241 "nvme_io": false, 00:09:15.241 "nvme_io_md": false, 00:09:15.241 "write_zeroes": true, 00:09:15.241 "zcopy": false, 00:09:15.241 "get_zone_info": false, 00:09:15.241 "zone_management": false, 00:09:15.241 "zone_append": false, 00:09:15.241 "compare": false, 00:09:15.241 "compare_and_write": false, 00:09:15.241 "abort": false, 00:09:15.241 "seek_hole": true, 00:09:15.241 "seek_data": true, 00:09:15.241 "copy": false, 00:09:15.241 "nvme_iov_md": false 00:09:15.241 }, 00:09:15.241 "driver_specific": { 00:09:15.241 "lvol": { 00:09:15.241 "lvol_store_uuid": "71545339-6ab4-41d6-8aec-163c5a0e21dc", 00:09:15.241 "base_bdev": "aio_bdev", 00:09:15.241 "thin_provision": false, 00:09:15.241 "num_allocated_clusters": 38, 00:09:15.241 "snapshot": false, 00:09:15.241 "clone": false, 00:09:15.241 "esnap_clone": false 00:09:15.241 } 00:09:15.241 } 00:09:15.241 } 00:09:15.241 ] 00:09:15.241 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:15.241 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:15.241 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:15.499 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:15.499 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:15.499 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:15.757 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:15.757 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9b3b8774-2b70-4124-a13e-505627ad8c52 00:09:16.015 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71545339-6ab4-41d6-8aec-163c5a0e21dc 00:09:16.273 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.532 00:09:16.532 real 0m17.097s 00:09:16.532 user 0m16.448s 00:09:16.532 sys 0m1.886s 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.532 05:29:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:16.532 ************************************ 00:09:16.532 END TEST lvs_grow_clean 00:09:16.532 ************************************ 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.532 ************************************ 00:09:16.532 START TEST lvs_grow_dirty 00:09:16.532 ************************************ 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.532 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.790 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:16.790 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:17.048 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9d271ef2-5925-4410-9127-3dd79bb59094 00:09:17.048 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:17.048 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:17.306 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:17.306 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:17.306 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
9d271ef2-5925-4410-9127-3dd79bb59094 lvol 150 00:09:17.563 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8cc5e488-d16c-4c4a-88b0-3054f23f13df 00:09:17.563 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.564 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:17.821 [2024-07-25 05:29:11.448454] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:17.821 [2024-07-25 05:29:11.448531] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:17.821 true 00:09:17.821 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:17.821 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:18.087 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:18.087 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:18.374 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
8cc5e488-d16c-4c4a-88b0-3054f23f13df 00:09:18.633 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:18.891 [2024-07-25 05:29:12.427461] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.891 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:19.149 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1529239 00:09:19.149 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:19.149 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.149 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1529239 /var/tmp/bdevperf.sock 00:09:19.149 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1529239 ']' 00:09:19.149 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:19.149 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.149 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:19.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:19.149 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.149 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:19.149 [2024-07-25 05:29:12.723212] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:09:19.149 [2024-07-25 05:29:12.723310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529239 ] 00:09:19.149 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.149 [2024-07-25 05:29:12.783431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.408 [2024-07-25 05:29:12.874718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.408 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.408 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:19.408 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:19.973 Nvme0n1 00:09:19.973 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:20.231 [ 00:09:20.231 { 00:09:20.231 "name": "Nvme0n1", 00:09:20.231 "aliases": [ 
00:09:20.231 "8cc5e488-d16c-4c4a-88b0-3054f23f13df" 00:09:20.231 ], 00:09:20.231 "product_name": "NVMe disk", 00:09:20.231 "block_size": 4096, 00:09:20.231 "num_blocks": 38912, 00:09:20.231 "uuid": "8cc5e488-d16c-4c4a-88b0-3054f23f13df", 00:09:20.231 "assigned_rate_limits": { 00:09:20.231 "rw_ios_per_sec": 0, 00:09:20.231 "rw_mbytes_per_sec": 0, 00:09:20.231 "r_mbytes_per_sec": 0, 00:09:20.231 "w_mbytes_per_sec": 0 00:09:20.231 }, 00:09:20.231 "claimed": false, 00:09:20.231 "zoned": false, 00:09:20.231 "supported_io_types": { 00:09:20.231 "read": true, 00:09:20.231 "write": true, 00:09:20.231 "unmap": true, 00:09:20.231 "flush": true, 00:09:20.231 "reset": true, 00:09:20.231 "nvme_admin": true, 00:09:20.231 "nvme_io": true, 00:09:20.231 "nvme_io_md": false, 00:09:20.231 "write_zeroes": true, 00:09:20.231 "zcopy": false, 00:09:20.231 "get_zone_info": false, 00:09:20.231 "zone_management": false, 00:09:20.231 "zone_append": false, 00:09:20.231 "compare": true, 00:09:20.231 "compare_and_write": true, 00:09:20.231 "abort": true, 00:09:20.231 "seek_hole": false, 00:09:20.231 "seek_data": false, 00:09:20.231 "copy": true, 00:09:20.231 "nvme_iov_md": false 00:09:20.231 }, 00:09:20.231 "memory_domains": [ 00:09:20.231 { 00:09:20.231 "dma_device_id": "system", 00:09:20.231 "dma_device_type": 1 00:09:20.231 } 00:09:20.231 ], 00:09:20.231 "driver_specific": { 00:09:20.231 "nvme": [ 00:09:20.231 { 00:09:20.231 "trid": { 00:09:20.231 "trtype": "TCP", 00:09:20.231 "adrfam": "IPv4", 00:09:20.231 "traddr": "10.0.0.2", 00:09:20.231 "trsvcid": "4420", 00:09:20.231 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:20.231 }, 00:09:20.231 "ctrlr_data": { 00:09:20.231 "cntlid": 1, 00:09:20.231 "vendor_id": "0x8086", 00:09:20.231 "model_number": "SPDK bdev Controller", 00:09:20.231 "serial_number": "SPDK0", 00:09:20.231 "firmware_revision": "24.09", 00:09:20.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:20.231 "oacs": { 00:09:20.231 "security": 0, 00:09:20.231 "format": 0, 00:09:20.231 
"firmware": 0, 00:09:20.231 "ns_manage": 0 00:09:20.231 }, 00:09:20.231 "multi_ctrlr": true, 00:09:20.231 "ana_reporting": false 00:09:20.231 }, 00:09:20.231 "vs": { 00:09:20.231 "nvme_version": "1.3" 00:09:20.231 }, 00:09:20.231 "ns_data": { 00:09:20.231 "id": 1, 00:09:20.231 "can_share": true 00:09:20.231 } 00:09:20.231 } 00:09:20.231 ], 00:09:20.231 "mp_policy": "active_passive" 00:09:20.231 } 00:09:20.231 } 00:09:20.231 ] 00:09:20.231 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1529289 00:09:20.231 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:20.231 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:20.231 Running I/O for 10 seconds... 00:09:21.166 Latency(us) 00:09:21.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.166 Nvme0n1 : 1.00 14248.00 55.66 0.00 0.00 0.00 0.00 0.00 00:09:21.166 =================================================================================================================== 00:09:21.166 Total : 14248.00 55.66 0.00 0.00 0.00 0.00 0.00 00:09:21.166 00:09:22.099 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:22.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.357 Nvme0n1 : 2.00 14459.00 56.48 0.00 0.00 0.00 0.00 0.00 00:09:22.357 =================================================================================================================== 00:09:22.357 Total : 14459.00 56.48 
0.00 0.00 0.00 0.00 0.00 00:09:22.357 00:09:22.357 true 00:09:22.357 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:22.357 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:22.615 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:22.615 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:22.615 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1529289 00:09:23.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.180 Nvme0n1 : 3.00 14596.33 57.02 0.00 0.00 0.00 0.00 0.00 00:09:23.180 =================================================================================================================== 00:09:23.180 Total : 14596.33 57.02 0.00 0.00 0.00 0.00 0.00 00:09:23.180 00:09:24.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.554 Nvme0n1 : 4.00 14665.00 57.29 0.00 0.00 0.00 0.00 0.00 00:09:24.554 =================================================================================================================== 00:09:24.554 Total : 14665.00 57.29 0.00 0.00 0.00 0.00 0.00 00:09:24.554 00:09:25.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.488 Nvme0n1 : 5.00 14781.80 57.74 0.00 0.00 0.00 0.00 0.00 00:09:25.488 =================================================================================================================== 00:09:25.488 Total : 14781.80 57.74 0.00 0.00 0.00 0.00 0.00 00:09:25.488 00:09:26.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:09:26.423 Nvme0n1 : 6.00 14820.17 57.89 0.00 0.00 0.00 0.00 0.00 00:09:26.423 =================================================================================================================== 00:09:26.423 Total : 14820.17 57.89 0.00 0.00 0.00 0.00 0.00 00:09:26.423 00:09:27.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.358 Nvme0n1 : 7.00 14854.00 58.02 0.00 0.00 0.00 0.00 0.00 00:09:27.358 =================================================================================================================== 00:09:27.358 Total : 14854.00 58.02 0.00 0.00 0.00 0.00 0.00 00:09:27.358 00:09:28.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.293 Nvme0n1 : 8.00 14894.88 58.18 0.00 0.00 0.00 0.00 0.00 00:09:28.293 =================================================================================================================== 00:09:28.293 Total : 14894.88 58.18 0.00 0.00 0.00 0.00 0.00 00:09:28.293 00:09:29.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.227 Nvme0n1 : 9.00 14919.78 58.28 0.00 0.00 0.00 0.00 0.00 00:09:29.227 =================================================================================================================== 00:09:29.227 Total : 14919.78 58.28 0.00 0.00 0.00 0.00 0.00 00:09:29.227 00:09:30.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.161 Nvme0n1 : 10.00 14965.10 58.46 0.00 0.00 0.00 0.00 0.00 00:09:30.161 =================================================================================================================== 00:09:30.161 Total : 14965.10 58.46 0.00 0.00 0.00 0.00 0.00 00:09:30.161 00:09:30.419 00:09:30.419 Latency(us) 00:09:30.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.419 Nvme0n1 : 10.00 14971.15 58.48 0.00 0.00 8544.74 
4636.07 17379.18 00:09:30.419 =================================================================================================================== 00:09:30.419 Total : 14971.15 58.48 0.00 0.00 8544.74 4636.07 17379.18 00:09:30.419 0 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1529239 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1529239 ']' 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1529239 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1529239 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1529239' 00:09:30.419 killing process with pid 1529239 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1529239 00:09:30.419 Received shutdown signal, test time was about 10.000000 seconds 00:09:30.419 00:09:30.419 Latency(us) 00:09:30.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.419 =================================================================================================================== 00:09:30.419 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:09:30.419 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1529239 00:09:30.677 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.934 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:31.192 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:31.192 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:31.192 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:31.192 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:31.192 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1526739 00:09:31.192 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1526739 00:09:31.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1526739 Killed "${NVMF_APP[@]}" "$@" 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1530596 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1530596 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1530596 ']' 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.457 05:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:31.457 [2024-07-25 05:29:24.972984] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:09:31.457 [2024-07-25 05:29:24.973065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.457 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.457 [2024-07-25 05:29:25.042211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.457 [2024-07-25 05:29:25.134389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.457 [2024-07-25 05:29:25.134444] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.457 [2024-07-25 05:29:25.134458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.457 [2024-07-25 05:29:25.134470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.457 [2024-07-25 05:29:25.134479] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:31.457 [2024-07-25 05:29:25.134505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.725 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.725 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:31.725 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.725 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.725 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:31.725 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.725 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.983 [2024-07-25 05:29:25.543249] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:31.983 [2024-07-25 05:29:25.543404] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:31.983 [2024-07-25 05:29:25.543453] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:31.983 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:31.983 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8cc5e488-d16c-4c4a-88b0-3054f23f13df 00:09:31.983 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=8cc5e488-d16c-4c4a-88b0-3054f23f13df 
00:09:31.983 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.983 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:31.983 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.983 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.983 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:32.240 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8cc5e488-d16c-4c4a-88b0-3054f23f13df -t 2000 00:09:32.498 [ 00:09:32.498 { 00:09:32.498 "name": "8cc5e488-d16c-4c4a-88b0-3054f23f13df", 00:09:32.498 "aliases": [ 00:09:32.498 "lvs/lvol" 00:09:32.498 ], 00:09:32.498 "product_name": "Logical Volume", 00:09:32.498 "block_size": 4096, 00:09:32.498 "num_blocks": 38912, 00:09:32.498 "uuid": "8cc5e488-d16c-4c4a-88b0-3054f23f13df", 00:09:32.498 "assigned_rate_limits": { 00:09:32.498 "rw_ios_per_sec": 0, 00:09:32.498 "rw_mbytes_per_sec": 0, 00:09:32.498 "r_mbytes_per_sec": 0, 00:09:32.498 "w_mbytes_per_sec": 0 00:09:32.498 }, 00:09:32.498 "claimed": false, 00:09:32.498 "zoned": false, 00:09:32.498 "supported_io_types": { 00:09:32.498 "read": true, 00:09:32.498 "write": true, 00:09:32.498 "unmap": true, 00:09:32.498 "flush": false, 00:09:32.498 "reset": true, 00:09:32.498 "nvme_admin": false, 00:09:32.498 "nvme_io": false, 00:09:32.498 "nvme_io_md": false, 00:09:32.498 "write_zeroes": true, 00:09:32.498 "zcopy": false, 00:09:32.498 "get_zone_info": false, 00:09:32.498 "zone_management": false, 00:09:32.498 "zone_append": 
false, 00:09:32.498 "compare": false, 00:09:32.498 "compare_and_write": false, 00:09:32.498 "abort": false, 00:09:32.498 "seek_hole": true, 00:09:32.498 "seek_data": true, 00:09:32.498 "copy": false, 00:09:32.498 "nvme_iov_md": false 00:09:32.498 }, 00:09:32.498 "driver_specific": { 00:09:32.498 "lvol": { 00:09:32.498 "lvol_store_uuid": "9d271ef2-5925-4410-9127-3dd79bb59094", 00:09:32.498 "base_bdev": "aio_bdev", 00:09:32.498 "thin_provision": false, 00:09:32.498 "num_allocated_clusters": 38, 00:09:32.498 "snapshot": false, 00:09:32.498 "clone": false, 00:09:32.498 "esnap_clone": false 00:09:32.498 } 00:09:32.498 } 00:09:32.498 } 00:09:32.498 ] 00:09:32.498 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:32.498 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:32.498 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:32.763 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:32.763 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:32.763 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:33.025 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:33.025 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:33.281 [2024-07-25 05:29:26.832410] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.281 05:29:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:33.281 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:33.539 request: 00:09:33.539 { 00:09:33.539 "uuid": "9d271ef2-5925-4410-9127-3dd79bb59094", 00:09:33.539 "method": "bdev_lvol_get_lvstores", 00:09:33.539 "req_id": 1 00:09:33.539 } 00:09:33.539 Got JSON-RPC error response 00:09:33.539 response: 00:09:33.539 { 00:09:33.539 "code": -19, 00:09:33.539 "message": "No such device" 00:09:33.539 } 00:09:33.539 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:33.539 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:33.539 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:33.539 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:33.539 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:33.796 aio_bdev 00:09:33.796 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8cc5e488-d16c-4c4a-88b0-3054f23f13df 00:09:33.796 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=8cc5e488-d16c-4c4a-88b0-3054f23f13df 00:09:33.796 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.796 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:33.796 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.796 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.796 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:34.054 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8cc5e488-d16c-4c4a-88b0-3054f23f13df -t 2000 00:09:34.312 [ 00:09:34.312 { 00:09:34.312 "name": "8cc5e488-d16c-4c4a-88b0-3054f23f13df", 00:09:34.312 "aliases": [ 00:09:34.312 "lvs/lvol" 00:09:34.312 ], 00:09:34.312 "product_name": "Logical Volume", 00:09:34.312 "block_size": 4096, 00:09:34.312 "num_blocks": 38912, 00:09:34.312 "uuid": "8cc5e488-d16c-4c4a-88b0-3054f23f13df", 00:09:34.312 "assigned_rate_limits": { 00:09:34.312 "rw_ios_per_sec": 0, 00:09:34.312 "rw_mbytes_per_sec": 0, 00:09:34.312 "r_mbytes_per_sec": 0, 00:09:34.312 "w_mbytes_per_sec": 0 00:09:34.312 }, 00:09:34.312 "claimed": false, 00:09:34.312 "zoned": false, 00:09:34.312 "supported_io_types": { 00:09:34.312 "read": true, 00:09:34.312 "write": true, 00:09:34.312 "unmap": true, 00:09:34.312 "flush": false, 00:09:34.312 "reset": true, 00:09:34.312 "nvme_admin": false, 00:09:34.312 "nvme_io": false, 00:09:34.312 "nvme_io_md": false, 00:09:34.312 "write_zeroes": true, 00:09:34.312 "zcopy": false, 00:09:34.312 "get_zone_info": false, 00:09:34.312 "zone_management": false, 00:09:34.312 "zone_append": false, 00:09:34.312 "compare": false, 00:09:34.312 "compare_and_write": false, 
00:09:34.312 "abort": false, 00:09:34.312 "seek_hole": true, 00:09:34.312 "seek_data": true, 00:09:34.312 "copy": false, 00:09:34.312 "nvme_iov_md": false 00:09:34.312 }, 00:09:34.312 "driver_specific": { 00:09:34.312 "lvol": { 00:09:34.312 "lvol_store_uuid": "9d271ef2-5925-4410-9127-3dd79bb59094", 00:09:34.312 "base_bdev": "aio_bdev", 00:09:34.312 "thin_provision": false, 00:09:34.312 "num_allocated_clusters": 38, 00:09:34.312 "snapshot": false, 00:09:34.312 "clone": false, 00:09:34.312 "esnap_clone": false 00:09:34.312 } 00:09:34.312 } 00:09:34.312 } 00:09:34.312 ] 00:09:34.312 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:34.312 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:34.312 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:34.571 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:34.571 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:34.571 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:34.829 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:34.829 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8cc5e488-d16c-4c4a-88b0-3054f23f13df 00:09:35.087 05:29:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d271ef2-5925-4410-9127-3dd79bb59094 00:09:35.346 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.604 00:09:35.604 real 0m19.078s 00:09:35.604 user 0m47.892s 00:09:35.604 sys 0m4.618s 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.604 ************************************ 00:09:35.604 END TEST lvs_grow_dirty 00:09:35.604 ************************************ 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:35.604 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:35.604 nvmf_trace.0 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.862 rmmod nvme_tcp 00:09:35.862 rmmod nvme_fabrics 00:09:35.862 rmmod nvme_keyring 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1530596 ']' 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1530596 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1530596 ']' 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1530596 
00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1530596 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1530596' 00:09:35.862 killing process with pid 1530596 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1530596 00:09:35.862 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1530596 00:09:36.121 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.121 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.121 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.121 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.121 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.121 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.121 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.121 05:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.021 05:29:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.021 00:09:38.021 real 0m41.455s 00:09:38.021 user 1m10.214s 00:09:38.021 sys 0m8.302s 00:09:38.021 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.021 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.021 ************************************ 00:09:38.021 END TEST nvmf_lvs_grow 00:09:38.021 ************************************ 00:09:38.021 05:29:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:38.021 05:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.021 05:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.021 05:29:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.280 ************************************ 00:09:38.280 START TEST nvmf_bdev_io_wait 00:09:38.280 ************************************ 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:38.280 * Looking for test storage... 
00:09:38.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:38.280 05:29:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.280 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.179 05:29:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.179 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:40.180 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:40.180 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.180 05:29:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:40.180 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:40.180 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:40.180 05:29:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.180 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:09:40.438 00:09:40.438 --- 10.0.0.2 ping statistics --- 00:09:40.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.438 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:40.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:09:40.438 00:09:40.438 --- 10.0.0.1 ping statistics --- 00:09:40.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.438 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1533127 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1533127 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1533127 ']' 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.438 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.438 [2024-07-25 05:29:34.033345] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:09:40.438 [2024-07-25 05:29:34.033438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.438 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.438 [2024-07-25 05:29:34.102322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.696 [2024-07-25 05:29:34.194614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:40.696 [2024-07-25 05:29:34.194676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.696 [2024-07-25 05:29:34.194704] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.696 [2024-07-25 05:29:34.194716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.696 [2024-07-25 05:29:34.194726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.696 [2024-07-25 05:29:34.194821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.696 [2024-07-25 05:29:34.194850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.696 [2024-07-25 05:29:34.194906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.696 [2024-07-25 05:29:34.194908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.696 
05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.696 [2024-07-25 05:29:34.359614] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.696 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.955 Malloc0 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:40.955 
05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.955 [2024-07-25 05:29:34.426649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1533273 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1533274 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1533277 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:40.955 { 00:09:40.955 "params": { 00:09:40.955 "name": "Nvme$subsystem", 00:09:40.955 "trtype": "$TEST_TRANSPORT", 00:09:40.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.955 "adrfam": "ipv4", 00:09:40.955 "trsvcid": "$NVMF_PORT", 00:09:40.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.955 "hdgst": ${hdgst:-false}, 00:09:40.955 "ddgst": ${ddgst:-false} 00:09:40.955 }, 00:09:40.955 "method": "bdev_nvme_attach_controller" 00:09:40.955 } 00:09:40.955 EOF 00:09:40.955 )") 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:40.955 05:29:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1533279 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:40.955 { 00:09:40.955 "params": { 00:09:40.955 "name": "Nvme$subsystem", 00:09:40.955 "trtype": "$TEST_TRANSPORT", 00:09:40.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.955 "adrfam": "ipv4", 00:09:40.955 "trsvcid": "$NVMF_PORT", 00:09:40.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.955 "hdgst": ${hdgst:-false}, 00:09:40.955 "ddgst": ${ddgst:-false} 00:09:40.955 }, 00:09:40.955 "method": "bdev_nvme_attach_controller" 00:09:40.955 } 00:09:40.955 EOF 00:09:40.955 )") 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:40.955 05:29:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:40.955 { 00:09:40.955 "params": { 00:09:40.955 "name": "Nvme$subsystem", 00:09:40.955 "trtype": "$TEST_TRANSPORT", 00:09:40.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.955 "adrfam": "ipv4", 00:09:40.955 "trsvcid": "$NVMF_PORT", 00:09:40.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.955 "hdgst": ${hdgst:-false}, 00:09:40.955 "ddgst": ${ddgst:-false} 00:09:40.955 }, 00:09:40.955 "method": "bdev_nvme_attach_controller" 00:09:40.955 } 00:09:40.955 EOF 00:09:40.955 )") 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:40.955 { 00:09:40.955 "params": { 00:09:40.955 "name": "Nvme$subsystem", 00:09:40.955 "trtype": "$TEST_TRANSPORT", 00:09:40.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.955 "adrfam": "ipv4", 00:09:40.955 "trsvcid": "$NVMF_PORT", 00:09:40.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.955 "hdgst": ${hdgst:-false}, 00:09:40.955 "ddgst": ${ddgst:-false} 00:09:40.955 }, 00:09:40.955 "method": "bdev_nvme_attach_controller" 00:09:40.955 } 00:09:40.955 EOF 00:09:40.955 )") 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:40.955 05:29:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1533273 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:40.955 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:40.955 "params": { 00:09:40.956 "name": "Nvme1", 00:09:40.956 "trtype": "tcp", 00:09:40.956 "traddr": "10.0.0.2", 00:09:40.956 "adrfam": "ipv4", 00:09:40.956 "trsvcid": "4420", 00:09:40.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.956 "hdgst": false, 00:09:40.956 "ddgst": false 00:09:40.956 }, 00:09:40.956 "method": "bdev_nvme_attach_controller" 00:09:40.956 }' 00:09:40.956 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:40.956 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:40.956 "params": { 00:09:40.956 "name": "Nvme1", 00:09:40.956 "trtype": "tcp", 00:09:40.956 "traddr": "10.0.0.2", 00:09:40.956 "adrfam": "ipv4", 00:09:40.956 "trsvcid": "4420", 00:09:40.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.956 "hdgst": false, 00:09:40.956 "ddgst": false 00:09:40.956 }, 00:09:40.956 "method": "bdev_nvme_attach_controller" 00:09:40.956 }' 00:09:40.956 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:40.956 05:29:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:40.956 "params": { 00:09:40.956 "name": "Nvme1", 00:09:40.956 "trtype": "tcp", 00:09:40.956 "traddr": "10.0.0.2", 00:09:40.956 "adrfam": "ipv4", 00:09:40.956 "trsvcid": "4420", 00:09:40.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.956 "hdgst": false, 00:09:40.956 "ddgst": false 00:09:40.956 }, 00:09:40.956 "method": "bdev_nvme_attach_controller" 00:09:40.956 }' 00:09:40.956 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:40.956 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:40.956 "params": { 00:09:40.956 "name": "Nvme1", 00:09:40.956 "trtype": "tcp", 00:09:40.956 "traddr": "10.0.0.2", 00:09:40.956 "adrfam": "ipv4", 00:09:40.956 "trsvcid": "4420", 00:09:40.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.956 "hdgst": false, 00:09:40.956 "ddgst": false 00:09:40.956 }, 00:09:40.956 "method": "bdev_nvme_attach_controller" 00:09:40.956 }' 00:09:40.956 [2024-07-25 05:29:34.474036] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:09:40.956 [2024-07-25 05:29:34.474037] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:09:40.956 [2024-07-25 05:29:34.474037] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:09:40.956 [2024-07-25 05:29:34.474128] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:40.956 [2024-07-25 05:29:34.474128] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:40.956 [2024-07-25 05:29:34.474129] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:40.956 [2024-07-25 05:29:34.474817] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:09:40.956 [2024-07-25 05:29:34.474891] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:40.956 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.956 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.956 [2024-07-25 05:29:34.645857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.214 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.214 [2024-07-25 05:29:34.720815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:41.214 [2024-07-25 05:29:34.744732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.214 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.214 [2024-07-25 05:29:34.820444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:41.214 [2024-07-25 05:29:34.866875] app.c: 
909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.472 [2024-07-25 05:29:34.922989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.472 [2024-07-25 05:29:34.946854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:41.472 [2024-07-25 05:29:34.992664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:41.472 Running I/O for 1 seconds... 00:09:41.472 Running I/O for 1 seconds... 00:09:41.729 Running I/O for 1 seconds... 00:09:41.730 Running I/O for 1 seconds... 00:09:42.663 00:09:42.663 Latency(us) 00:09:42.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.663 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:42.663 Nvme1n1 : 1.01 9966.35 38.93 0.00 0.00 12784.84 8495.41 19709.35 00:09:42.663 =================================================================================================================== 00:09:42.663 Total : 9966.35 38.93 0.00 0.00 12784.84 8495.41 19709.35 00:09:42.663 00:09:42.663 Latency(us) 00:09:42.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.663 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:42.663 Nvme1n1 : 1.01 9629.42 37.61 0.00 0.00 13239.27 6941.96 24078.41 00:09:42.663 =================================================================================================================== 00:09:42.663 Total : 9629.42 37.61 0.00 0.00 13239.27 6941.96 24078.41 00:09:42.663 00:09:42.663 Latency(us) 00:09:42.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.663 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:42.663 Nvme1n1 : 1.01 8726.78 34.09 0.00 0.00 14594.41 4393.34 22622.06 00:09:42.663 =================================================================================================================== 00:09:42.663 Total : 8726.78 34.09 0.00 0.00 14594.41 4393.34 22622.06 
00:09:42.663 00:09:42.663 Latency(us) 00:09:42.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.663 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:42.663 Nvme1n1 : 1.00 182985.97 714.79 0.00 0.00 696.77 277.62 964.84 00:09:42.663 =================================================================================================================== 00:09:42.663 Total : 182985.97 714.79 0.00 0.00 696.77 277.62 964.84 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1533274 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1533277 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1533279 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:42.921 
05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:42.921 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:42.921 rmmod nvme_tcp 00:09:42.921 rmmod nvme_fabrics 00:09:42.921 rmmod nvme_keyring 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1533127 ']' 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1533127 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1533127 ']' 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1533127 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1533127 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1533127' 00:09:43.178 killing process with pid 1533127 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@969 -- # kill 1533127 00:09:43.178 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1533127 00:09:43.435 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:43.435 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:43.435 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:43.435 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.435 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:43.435 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.435 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.435 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.375 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:45.375 00:09:45.375 real 0m7.206s 00:09:45.375 user 0m15.985s 00:09:45.375 sys 0m3.710s 00:09:45.375 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.375 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.375 ************************************ 00:09:45.375 END TEST nvmf_bdev_io_wait 00:09:45.375 ************************************ 00:09:45.375 05:29:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:45.375 05:29:38 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:45.375 05:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.375 05:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.375 ************************************ 00:09:45.375 START TEST nvmf_queue_depth 00:09:45.375 ************************************ 00:09:45.375 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:45.375 * Looking for test storage... 00:09:45.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.375 05:29:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.375 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:47.907 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:47.907 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.907 05:29:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:47.907 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:47.907 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.907 05:29:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.907 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.908 
05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:09:47.908 00:09:47.908 --- 10.0.0.2 ping statistics --- 00:09:47.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.908 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:09:47.908 00:09:47.908 --- 10.0.0.1 ping statistics --- 00:09:47.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.908 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1535499 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 
1535499 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1535499 ']' 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.908 [2024-07-25 05:29:41.289548] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:09:47.908 [2024-07-25 05:29:41.289646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.908 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.908 [2024-07-25 05:29:41.357745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.908 [2024-07-25 05:29:41.443671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.908 [2024-07-25 05:29:41.443723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:47.908 [2024-07-25 05:29:41.443744] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.908 [2024-07-25 05:29:41.443763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.908 [2024-07-25 05:29:41.443778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.908 [2024-07-25 05:29:41.443811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.908 [2024-07-25 05:29:41.587174] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.908 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.168 Malloc0 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.168 [2024-07-25 05:29:41.645632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.168 05:29:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1535524 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1535524 /var/tmp/bdevperf.sock 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1535524 ']' 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:48.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.168 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.168 [2024-07-25 05:29:41.691575] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:09:48.168 [2024-07-25 05:29:41.691641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535524 ] 00:09:48.168 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.168 [2024-07-25 05:29:41.754397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.168 [2024-07-25 05:29:41.845637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.426 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.426 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:48.426 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:48.426 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.426 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.684 NVMe0n1 00:09:48.684 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.684 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:48.684 Running I/O for 10 seconds... 
00:10:00.882 00:10:00.882 Latency(us) 00:10:00.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.882 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:00.882 Verification LBA range: start 0x0 length 0x4000 00:10:00.882 NVMe0n1 : 10.07 8546.59 33.39 0.00 0.00 119247.23 16893.72 73788.68 00:10:00.882 =================================================================================================================== 00:10:00.882 Total : 8546.59 33.39 0.00 0.00 119247.23 16893.72 73788.68 00:10:00.882 0 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1535524 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1535524 ']' 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1535524 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1535524 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1535524' 00:10:00.882 killing process with pid 1535524 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1535524 00:10:00.882 Received shutdown signal, test time was about 10.000000 seconds 00:10:00.882 00:10:00.882 Latency(us) 00:10:00.882 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.882 =================================================================================================================== 00:10:00.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1535524 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:00.882 rmmod nvme_tcp 00:10:00.882 rmmod nvme_fabrics 00:10:00.882 rmmod nvme_keyring 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1535499 ']' 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1535499 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1535499 ']' 
00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1535499 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:00.882 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.883 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1535499 00:10:00.883 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:00.883 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:00.883 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1535499' 00:10:00.883 killing process with pid 1535499 00:10:00.883 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1535499 00:10:00.883 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1535499 00:10:00.883 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:00.883 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:00.883 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:00.883 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:00.883 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:00.883 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.883 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:10:00.883 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.447 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:01.447 00:10:01.447 real 0m16.104s 00:10:01.447 user 0m22.750s 00:10:01.447 sys 0m3.005s 00:10:01.447 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.447 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.447 ************************************ 00:10:01.447 END TEST nvmf_queue_depth 00:10:01.447 ************************************ 00:10:01.447 05:29:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:01.447 05:29:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:01.447 05:29:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.447 05:29:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.447 ************************************ 00:10:01.447 START TEST nvmf_target_multipath 00:10:01.447 ************************************ 00:10:01.447 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:01.705 * Looking for test storage... 
00:10:01.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.705 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:01.706 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:10:03.604 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.604 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:03.605 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:03.605 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:03.605 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.605 05:29:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:03.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:03.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:10:03.605 00:10:03.605 --- 10.0.0.2 ping statistics --- 00:10:03.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.605 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:10:03.605 00:10:03.605 --- 10.0.0.1 ping statistics --- 00:10:03.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.605 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:03.605 05:29:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:03.605 only one NIC for nvmf test 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.605 rmmod nvme_tcp 00:10:03.605 rmmod nvme_fabrics 00:10:03.605 rmmod nvme_keyring 00:10:03.605 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.864 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:03.864 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:03.864 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:03.864 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.864 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:03.864 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:03.864 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.864 05:29:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.864 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.864 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.864 05:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:05.762 05:29:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:05.762 00:10:05.762 real 0m4.240s 00:10:05.762 user 0m0.777s 00:10:05.762 sys 0m1.443s 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:05.762 ************************************ 00:10:05.762 END TEST nvmf_target_multipath 00:10:05.762 ************************************ 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.762 
05:29:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.762 ************************************ 00:10:05.762 START TEST nvmf_zcopy 00:10:05.762 ************************************ 00:10:05.762 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:05.762 * Looking for test storage... 00:10:06.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:06.021 05:29:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:06.021 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.919 05:30:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:07.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:07.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:07.919 05:30:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:07.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.919 
05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:07.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:07.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:07.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms
00:10:07.919
00:10:07.919 --- 10.0.0.2 ping statistics ---
00:10:07.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:07.919 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:07.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:07.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms
00:10:07.919
00:10:07.919 --- 10.0.0.1 ping statistics ---
00:10:07.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:07.919 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:07.919 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy
-- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1540799 00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1540799 00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1540799 ']' 00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.920 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.920 [2024-07-25 05:30:01.599331] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:10:07.920 [2024-07-25 05:30:01.599425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.178 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.178 [2024-07-25 05:30:01.663798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.179 [2024-07-25 05:30:01.753923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.179 [2024-07-25 05:30:01.753986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.179 [2024-07-25 05:30:01.754001] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.179 [2024-07-25 05:30:01.754012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.179 [2024-07-25 05:30:01.754021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:08.179 [2024-07-25 05:30:01.754051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.179 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.179 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:08.179 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.179 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:08.179 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.437 [2024-07-25 05:30:01.895726] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.437 [2024-07-25 05:30:01.911953] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.437 malloc0 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.437 05:30:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:08.437 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:08.438 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:08.438 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:08.438 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:08.438 { 00:10:08.438 "params": { 00:10:08.438 "name": "Nvme$subsystem", 00:10:08.438 "trtype": "$TEST_TRANSPORT", 00:10:08.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.438 "adrfam": "ipv4", 00:10:08.438 "trsvcid": "$NVMF_PORT", 00:10:08.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.438 "hdgst": ${hdgst:-false}, 00:10:08.438 "ddgst": ${ddgst:-false} 00:10:08.438 }, 00:10:08.438 "method": "bdev_nvme_attach_controller" 00:10:08.438 } 00:10:08.438 EOF 00:10:08.438 )") 00:10:08.438 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:08.438 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:08.438 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:08.438 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:08.438 "params": { 00:10:08.438 "name": "Nvme1", 00:10:08.438 "trtype": "tcp", 00:10:08.438 "traddr": "10.0.0.2", 00:10:08.438 "adrfam": "ipv4", 00:10:08.438 "trsvcid": "4420", 00:10:08.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.438 "hdgst": false, 00:10:08.438 "ddgst": false 00:10:08.438 }, 00:10:08.438 "method": "bdev_nvme_attach_controller" 00:10:08.438 }' 00:10:08.438 [2024-07-25 05:30:01.996632] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:10:08.438 [2024-07-25 05:30:01.996729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540844 ] 00:10:08.438 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.438 [2024-07-25 05:30:02.058359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.696 [2024-07-25 05:30:02.155368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.696 Running I/O for 10 seconds... 
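The --json /dev/fd/62 argument above feeds bdevperf a config that gen_nvmf_target_json assembles on the fly: one attach-controller stanza per subsystem, built by substituting shell variables into a heredoc template and then normalizing with jq, exactly as the trace shows. A hedged sketch of that pattern — stanza_for is an illustrative name invented here, not an SPDK helper; the values mirror the log's Nvme1 entry:

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-template pattern traced above (nvmf/common.sh).
# stanza_for is a stand-in name; values mirror the Nvme1 entry.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

stanza_for() {
  local n=$1
  # Shell expands $n and the connection variables inside the heredoc,
  # producing one bdev_nvme_attach_controller method call.
  cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

Calling stanza_for 1 prints the same parameters the printf '%s\n' record above shows reaching bdevperf, with $subsystem resolved to 1 and the digest options defaulted to false.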
00:10:20.893
00:10:20.893                                                          Latency(us)
00:10:20.893 Device Information                        : runtime(s)    IOPS     MiB/s    Fail/s    TO/s    Average    min    max
00:10:20.893 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:20.893 	 Verification LBA range: start 0x0 length 0x1000
00:10:20.893 	 Nvme1n1                                 :      10.02    5630.91    43.99    0.00    0.00    22668.76    2949.12    31651.46
00:10:20.893 ===================================================================================================================
00:10:20.893 Total                                     :               5630.91    43.99    0.00    0.00    22668.76    2949.12    31651.46
00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1542658
00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:20.893 {
00:10:20.893 "params": {
00:10:20.893 "name": "Nvme$subsystem",
00:10:20.893 "trtype": "$TEST_TRANSPORT",
00:10:20.893 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:20.893 "adrfam": "ipv4",
00:10:20.893 "trsvcid": "$NVMF_PORT",
00:10:20.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:20.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:20.893 "hdgst": 
${hdgst:-false}, 00:10:20.893 "ddgst": ${ddgst:-false} 00:10:20.893 }, 00:10:20.893 "method": "bdev_nvme_attach_controller" 00:10:20.893 } 00:10:20.893 EOF 00:10:20.893 )") 00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:20.893 [2024-07-25 05:30:12.656182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.656229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:20.893 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:20.893 "params": { 00:10:20.893 "name": "Nvme1", 00:10:20.893 "trtype": "tcp", 00:10:20.893 "traddr": "10.0.0.2", 00:10:20.893 "adrfam": "ipv4", 00:10:20.893 "trsvcid": "4420", 00:10:20.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.893 "hdgst": false, 00:10:20.893 "ddgst": false 00:10:20.893 }, 00:10:20.893 "method": "bdev_nvme_attach_controller" 00:10:20.893 }' 00:10:20.893 [2024-07-25 05:30:12.664136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.664163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 [2024-07-25 05:30:12.672152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.672176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 [2024-07-25 05:30:12.680168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.680190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 [2024-07-25 05:30:12.688186] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.688207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 [2024-07-25 05:30:12.692910] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:10:20.893 [2024-07-25 05:30:12.692969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542658 ] 00:10:20.893 [2024-07-25 05:30:12.696208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.696251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 [2024-07-25 05:30:12.704251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.704273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 [2024-07-25 05:30:12.712276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.712312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 [2024-07-25 05:30:12.720309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.720331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.893 [2024-07-25 05:30:12.728315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.728337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 [2024-07-25 05:30:12.736355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.736377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.893 [2024-07-25 05:30:12.744360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.893 [2024-07-25 05:30:12.744382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:20.893 [2024-07-25 05:30:12.756586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 
00:10:20.894 [2024-07-25 05:30:12.854677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:10:20.895 Running I/O for 5 seconds... 
[The error pair above — subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats continuously from 05:30:12.744 through 05:30:14.505 with only the timestamps changing; the repeated occurrences are omitted here.] 
00:10:20.897 [2024-07-25 05:30:14.517382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.897 
[2024-07-25 05:30:14.517410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.897 [2024-07-25 05:30:14.528552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.897 [2024-07-25 05:30:14.528583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.897 [2024-07-25 05:30:14.540439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.897 [2024-07-25 05:30:14.540467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.897 [2024-07-25 05:30:14.551704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.897 [2024-07-25 05:30:14.551735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.897 [2024-07-25 05:30:14.563205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.897 [2024-07-25 05:30:14.563236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.897 [2024-07-25 05:30:14.575087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.897 [2024-07-25 05:30:14.575117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.897 [2024-07-25 05:30:14.586855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.897 [2024-07-25 05:30:14.586885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.155 [2024-07-25 05:30:14.598332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.155 [2024-07-25 05:30:14.598361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.155 [2024-07-25 05:30:14.611856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.155 [2024-07-25 05:30:14.611897] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.155 [2024-07-25 05:30:14.622634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.155 [2024-07-25 05:30:14.622666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.155 [2024-07-25 05:30:14.634219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.155 [2024-07-25 05:30:14.634257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.155 [2024-07-25 05:30:14.645673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.645704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.657603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.657635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.668899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.668930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.680512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.680540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.692054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.692085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.703725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.703756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:21.156 [2024-07-25 05:30:14.714880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.714911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.726511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.726539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.738082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.738113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.749796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.749827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.761619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.761650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.773324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.773352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.784512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.784539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.796061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.796091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.807431] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.807459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.818913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.818944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.830376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.830413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.841969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.842000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.156 [2024-07-25 05:30:14.853975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.156 [2024-07-25 05:30:14.854006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.865762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.865794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.877849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.877879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.889473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.889501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.900955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.900985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.912714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.912746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.925299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.925327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.936748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.936791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.948347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.948375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.961802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.961833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.972917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.972948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.984562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 [2024-07-25 05:30:14.984608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:14.997999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.414 
[2024-07-25 05:30:14.998030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.414 [2024-07-25 05:30:15.008573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.415 [2024-07-25 05:30:15.008604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.415 [2024-07-25 05:30:15.020301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.415 [2024-07-25 05:30:15.020344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.415 [2024-07-25 05:30:15.031387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.415 [2024-07-25 05:30:15.031416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.415 [2024-07-25 05:30:15.042675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.415 [2024-07-25 05:30:15.042707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.415 [2024-07-25 05:30:15.054299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.415 [2024-07-25 05:30:15.054335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.415 [2024-07-25 05:30:15.065875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.415 [2024-07-25 05:30:15.065906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.415 [2024-07-25 05:30:15.079319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.415 [2024-07-25 05:30:15.079346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.415 [2024-07-25 05:30:15.090310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.415 [2024-07-25 05:30:15.090338] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.415 [2024-07-25 05:30:15.101898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.415 [2024-07-25 05:30:15.101929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.415 [2024-07-25 05:30:15.113212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.415 [2024-07-25 05:30:15.113252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.124843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.124875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.136252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.136297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.147840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.147871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.159050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.159080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.170264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.170311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.181973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.182005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:21.673 [2024-07-25 05:30:15.193955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.193986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.205535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.205580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.216593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.216625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.228181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.228212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.239849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.239879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.251778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.251810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.263039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.263069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.274514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.274558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.285856] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.285886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.297661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.673 [2024-07-25 05:30:15.297692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.673 [2024-07-25 05:30:15.309430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.674 [2024-07-25 05:30:15.309458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.674 [2024-07-25 05:30:15.321012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.674 [2024-07-25 05:30:15.321043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.674 [2024-07-25 05:30:15.334504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.674 [2024-07-25 05:30:15.334532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.674 [2024-07-25 05:30:15.345145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.674 [2024-07-25 05:30:15.345175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.674 [2024-07-25 05:30:15.356622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.674 [2024-07-25 05:30:15.356653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.674 [2024-07-25 05:30:15.368079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.674 [2024-07-25 05:30:15.368111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.381509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.381555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.392464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.392492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.404264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.404308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.416035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.416067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.429564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.429596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.440479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.440508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.452144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.452175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.463425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.463453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.475024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 
[2024-07-25 05:30:15.475055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.486832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.486863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.498237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.498292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.509789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.509820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.521371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.521400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.533133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.533163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.544630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.544662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.556214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.556259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.569769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.569800] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.580510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.580554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.592291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.592318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.604229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.604267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.616079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.616110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.932 [2024-07-25 05:30:15.627374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.932 [2024-07-25 05:30:15.627403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.639189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.639220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.650963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.650995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.662477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.662505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:22.191 [2024-07-25 05:30:15.673618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.673648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.684936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.684967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.696710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.696742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.708356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.708384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.719973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.720005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.731440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.731469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.742892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.742923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.754860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.191 [2024-07-25 05:30:15.754891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.191 [2024-07-25 05:30:15.766495] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:22.191 [2024-07-25 05:30:15.766523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:22.191 [2024-07-25 05:30:15.778069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:22.191 [2024-07-25 05:30:15.778100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same subsystem.c:2058 / nvmf_rpc.c:1553 error pair repeated roughly every 11 ms (about 170 further occurrences) through 2024-07-25 05:30:17.679736 ...]
00:10:24.004 [2024-07-25 05:30:17.679704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:24.004 [2024-07-25 05:30:17.679736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:24.004 [2024-07-25 05:30:17.690662] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.004 [2024-07-25 05:30:17.690692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.004 [2024-07-25 05:30:17.703092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.004 [2024-07-25 05:30:17.703128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.715493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.715520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.727161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.727191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.738538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.738582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.750003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.750033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.761326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.761353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.772805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.772835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.784541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.784585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.796180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.796210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.807631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.807661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.819140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.819169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.830762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.830793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.842287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.842314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.853758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.853788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.865071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.865101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.876393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 
[2024-07-25 05:30:17.876420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.888136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.888166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.899882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.899913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.911650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.911681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.924840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.924880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.935505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.935548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.947873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.947902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.262 [2024-07-25 05:30:17.959796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.262 [2024-07-25 05:30:17.959827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:17.971698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:17.971728] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:17.985170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:17.985201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:17.996234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:17.996297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.007680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.007710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.019064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.019094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.030776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.030806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.042413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.042441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.054222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.054262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.065977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.066007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:24.521 [2024-07-25 05:30:18.077445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.077473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.088710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.088741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.100157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.100187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.111845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.111875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.123426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.123453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.134691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.134722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.146514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.146551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.158290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.158317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.521 [2024-07-25 05:30:18.171898] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.521 [2024-07-25 05:30:18.171928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... error pair repeats through 05:30:18.236 ...]
00:10:24.780 00:10:24.780 Latency(us) 00:10:24.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.780 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:24.780 Nvme1n1 : 5.01 10951.30 85.56 0.00 0.00 11671.47 5412.79 25631.86 00:10:24.780 =================================================================================================================== 00:10:24.780 Total : 10951.30 85.56 0.00 0.00 11671.47 5412.79 25631.86 00:10:24.780 [2024-07-25 05:30:18.244343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.780 [2024-07-25 05:30:18.244368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... error pair repeats through 05:30:18.420 ...]
00:10:24.781 [2024-07-25 05:30:18.428882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.781 [2024-07-25 05:30:18.428928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused:
*ERROR*: Unable to add namespace 00:10:24.781 [2024-07-25 05:30:18.436904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.781 [2024-07-25 05:30:18.436947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.781 [2024-07-25 05:30:18.444878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.781 [2024-07-25 05:30:18.444902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.781 [2024-07-25 05:30:18.452897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.781 [2024-07-25 05:30:18.452922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.781 [2024-07-25 05:30:18.460921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.781 [2024-07-25 05:30:18.460946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1542658) - No such process 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1542658 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.781 delay0 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.781 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.039 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.039 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:25.039 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.039 [2024-07-25 05:30:18.576480] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:31.596 Initializing NVMe Controllers 00:10:31.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:31.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:31.596 Initialization complete. Launching workers. 
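The "kill: (1542658) - No such process" line above, immediately followed by `wait 1542658`, is the zcopy test script tolerating a perf process that has already exited and then reaping it to collect an exit status. A minimal, self-contained sketch of that kill-then-wait shell pattern (the `sleep` job here is a stand-in for the real SPDK process, and the specific PID is of course arbitrary):

```shell
#!/usr/bin/env bash
# Start a stand-in background job, force-kill it, then reap it with `wait`.
# `|| true` mirrors the script's tolerance of "No such process";
# `wait` on a signal-killed job returns 128 + signal number (137 for SIGKILL).
sleep 60 &
pid=$!
kill -9 "$pid" 2>/dev/null || true   # safe even if the process already exited
wait "$pid"                          # reaps the job and recovers its status
status=$?
echo "exit status: $status"
```

Running this prints `exit status: 137`, which is why the surrounding test checks `wait`'s return rather than `kill`'s.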
00:10:31.596 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 281 00:10:31.596 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 551, failed to submit 50 00:10:31.596 success 382, unsuccess 169, failed 0 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:31.596 rmmod nvme_tcp 00:10:31.596 rmmod nvme_fabrics 00:10:31.596 rmmod nvme_keyring 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1540799 ']' 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1540799 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1540799 ']' 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1540799 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1540799 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1540799' 00:10:31.596 killing process with pid 1540799 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1540799 00:10:31.596 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1540799 00:10:31.596 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.596 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:31.596 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:31.596 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:31.596 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:31.596 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.596 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.596 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:33.530 00:10:33.530 real 
0m27.645s 00:10:33.530 user 0m40.908s 00:10:33.530 sys 0m8.270s 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.530 ************************************ 00:10:33.530 END TEST nvmf_zcopy 00:10:33.530 ************************************ 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.530 ************************************ 00:10:33.530 START TEST nvmf_nmic 00:10:33.530 ************************************ 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:33.530 * Looking for test storage... 
00:10:33.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.530 
05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:33.530 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:33.531 05:30:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.531 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:33.531 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:33.531 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:33.531 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.531 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.531 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.531 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:33.531 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:33.531 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:33.531 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:35.432 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:35.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:35.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:35.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:35.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:35.433 05:30:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:35.433 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:35.691 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.691 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.691 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.691 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.691 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:35.691 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:35.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:35.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:10:35.692 00:10:35.692 --- 10.0.0.2 ping statistics --- 00:10:35.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.692 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:10:35.692 00:10:35.692 --- 10.0.0.1 ping statistics --- 00:10:35.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.692 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1545933 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1545933 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1545933 ']' 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.692 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.692 [2024-07-25 05:30:29.339065] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:10:35.692 [2024-07-25 05:30:29.339165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.692 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.950 [2024-07-25 05:30:29.405324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.950 [2024-07-25 05:30:29.494980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.950 [2024-07-25 05:30:29.495052] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.950 [2024-07-25 05:30:29.495065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.950 [2024-07-25 05:30:29.495076] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.950 [2024-07-25 05:30:29.495085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:35.950 [2024-07-25 05:30:29.495169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.950 [2024-07-25 05:30:29.495233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.950 [2024-07-25 05:30:29.495301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.950 [2024-07-25 05:30:29.495305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.950 [2024-07-25 05:30:29.643457] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.950 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.208 Malloc0 00:10:36.208 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.209 [2024-07-25 05:30:29.695800] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:36.209 test case1: single bdev can't be used in multiple subsystems 
00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.209 [2024-07-25 05:30:29.719652] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:36.209 [2024-07-25 05:30:29.719681] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:36.209 [2024-07-25 05:30:29.719695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.209 request: 00:10:36.209 { 00:10:36.209 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:36.209 "namespace": { 00:10:36.209 
"bdev_name": "Malloc0", 00:10:36.209 "no_auto_visible": false 00:10:36.209 }, 00:10:36.209 "method": "nvmf_subsystem_add_ns", 00:10:36.209 "req_id": 1 00:10:36.209 } 00:10:36.209 Got JSON-RPC error response 00:10:36.209 response: 00:10:36.209 { 00:10:36.209 "code": -32602, 00:10:36.209 "message": "Invalid parameters" 00:10:36.209 } 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:36.209 Adding namespace failed - expected result. 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:36.209 test case2: host connect to nvmf target in multiple paths 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.209 [2024-07-25 05:30:29.727739] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.209 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:36.775 05:30:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:37.708 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:37.708 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:37.708 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:37.708 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:37.708 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:39.606 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:39.606 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:39.606 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:39.606 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:39.606 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:39.606 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:39.606 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:39.606 [global] 00:10:39.606 thread=1 00:10:39.606 invalidate=1 00:10:39.606 rw=write 00:10:39.606 time_based=1 00:10:39.606 runtime=1 00:10:39.606 ioengine=libaio 00:10:39.606 direct=1 00:10:39.606 bs=4096 00:10:39.606 iodepth=1 00:10:39.606 
norandommap=0 00:10:39.606 numjobs=1 00:10:39.606 00:10:39.606 verify_dump=1 00:10:39.606 verify_backlog=512 00:10:39.606 verify_state_save=0 00:10:39.606 do_verify=1 00:10:39.606 verify=crc32c-intel 00:10:39.606 [job0] 00:10:39.606 filename=/dev/nvme0n1 00:10:39.606 Could not set queue depth (nvme0n1) 00:10:39.606 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.606 fio-3.35 00:10:39.606 Starting 1 thread 00:10:40.985 00:10:40.985 job0: (groupid=0, jobs=1): err= 0: pid=1546572: Thu Jul 25 05:30:34 2024 00:10:40.985 read: IOPS=1599, BW=6398KiB/s (6551kB/s)(6404KiB/1001msec) 00:10:40.985 slat (nsec): min=4245, max=58446, avg=18941.98, stdev=10443.99 00:10:40.985 clat (usec): min=245, max=613, avg=343.07, stdev=58.13 00:10:40.985 lat (usec): min=258, max=630, avg=362.01, stdev=63.07 00:10:40.985 clat percentiles (usec): 00:10:40.985 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 297], 00:10:40.985 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:10:40.985 | 70.00th=[ 355], 80.00th=[ 379], 90.00th=[ 445], 95.00th=[ 474], 00:10:40.985 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 611], 99.95th=[ 611], 00:10:40.985 | 99.99th=[ 611] 00:10:40.985 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:40.985 slat (nsec): min=5700, max=80705, avg=13976.54, stdev=6550.32 00:10:40.985 clat (usec): min=151, max=376, avg=183.02, stdev=19.35 00:10:40.985 lat (usec): min=159, max=392, avg=196.99, stdev=23.82 00:10:40.985 clat percentiles (usec): 00:10:40.985 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:10:40.985 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:10:40.985 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 206], 00:10:40.985 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 306], 99.95th=[ 363], 00:10:40.986 | 99.99th=[ 375] 00:10:40.986 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 
0.00, samples=1 00:10:40.986 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:40.986 lat (usec) : 250=55.17%, 500=44.48%, 750=0.36% 00:10:40.986 cpu : usr=3.30%, sys=6.20%, ctx=3649, majf=0, minf=2 00:10:40.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.986 issued rwts: total=1601,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.986 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.986 00:10:40.986 Run status group 0 (all jobs): 00:10:40.986 READ: bw=6398KiB/s (6551kB/s), 6398KiB/s-6398KiB/s (6551kB/s-6551kB/s), io=6404KiB (6558kB), run=1001-1001msec 00:10:40.986 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:40.986 00:10:40.986 Disk stats (read/write): 00:10:40.986 nvme0n1: ios=1586/1652, merge=0/0, ticks=539/301, in_queue=840, util=91.88% 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.986 rmmod nvme_tcp 00:10:40.986 rmmod nvme_fabrics 00:10:40.986 rmmod nvme_keyring 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1545933 ']' 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1545933 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1545933 ']' 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1545933 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.986 05:30:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1545933 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1545933' 00:10:40.986 killing process with pid 1545933 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1545933 00:10:40.986 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1545933 00:10:41.244 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:41.244 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:41.244 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:41.244 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:41.244 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:41.244 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.244 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.244 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.772 05:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:43.772 00:10:43.772 real 0m9.854s 00:10:43.772 user 0m22.143s 00:10:43.772 sys 0m2.475s 00:10:43.772 05:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.772 05:30:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.772 ************************************ 00:10:43.772 END TEST nvmf_nmic 00:10:43.772 ************************************ 00:10:43.772 05:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:43.772 05:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:43.772 05:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.772 05:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.772 ************************************ 00:10:43.772 START TEST nvmf_fio_target 00:10:43.772 ************************************ 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:43.773 * Looking for test storage... 
00:10:43.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.773 05:30:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:43.773 05:30:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:43.773 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.672 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.672 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:45.673 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:45.673 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:45.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:45.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:45.673 05:30:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.673 05:30:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:45.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:45.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms
00:10:45.673
00:10:45.673 --- 10.0.0.2 ping statistics ---
00:10:45.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:45.673 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms
00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:45.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:45.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms
00:10:45.673
00:10:45.673 --- 10.0.0.1 ping statistics ---
00:10:45.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:45.673 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms
00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0
00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:45.673 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1548644
00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1548644 00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1548644 ']' 00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.674 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.674 [2024-07-25 05:30:39.148998] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:10:45.674 [2024-07-25 05:30:39.149094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.674 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.674 [2024-07-25 05:30:39.217835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.674 [2024-07-25 05:30:39.313674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.674 [2024-07-25 05:30:39.313731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:45.674 [2024-07-25 05:30:39.313755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.674 [2024-07-25 05:30:39.313770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.674 [2024-07-25 05:30:39.313782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.674 [2024-07-25 05:30:39.313875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.674 [2024-07-25 05:30:39.313955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.674 [2024-07-25 05:30:39.314007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.674 [2024-07-25 05:30:39.314010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.932 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:45.932 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:45.932 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:45.932 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:45.932 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.932 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.932 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:46.190 [2024-07-25 05:30:39.742692] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.190 05:30:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.448 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:46.448 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.705 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:46.705 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.963 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:46.963 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.221 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:47.221 05:30:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:47.478 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.742 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:47.742 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.016 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:48.016 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.274 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:48.274 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:48.531 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:48.789 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:48.789 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.047 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:49.047 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:49.305 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.562 [2024-07-25 05:30:43.138408] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.562 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:49.818 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:50.075 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:50.639 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:50.639 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:50.639 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.639 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:50.639 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:50.639 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:53.164 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:53.164 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:53.164 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:53.164 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:53.164 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.164 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:53.164 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:53.164 [global] 00:10:53.164 thread=1 00:10:53.164 invalidate=1 00:10:53.164 rw=write 00:10:53.164 time_based=1 00:10:53.164 runtime=1 00:10:53.164 ioengine=libaio 00:10:53.164 direct=1 00:10:53.164 bs=4096 00:10:53.164 iodepth=1 00:10:53.164 norandommap=0 00:10:53.164 numjobs=1 00:10:53.164 00:10:53.164 verify_dump=1 00:10:53.164 verify_backlog=512 00:10:53.164 verify_state_save=0 00:10:53.164 do_verify=1 00:10:53.164 verify=crc32c-intel 00:10:53.164 [job0] 00:10:53.164 filename=/dev/nvme0n1 00:10:53.164 [job1] 00:10:53.164 filename=/dev/nvme0n2 00:10:53.164 [job2] 00:10:53.164 filename=/dev/nvme0n3 00:10:53.164 [job3] 00:10:53.164 filename=/dev/nvme0n4 00:10:53.164 Could not set queue depth (nvme0n1) 00:10:53.164 Could not set queue depth (nvme0n2) 00:10:53.164 Could not set queue depth (nvme0n3) 00:10:53.164 Could not set queue depth (nvme0n4) 00:10:53.164 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.164 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.164 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.164 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.164 fio-3.35 00:10:53.164 Starting 4 threads 00:10:54.119 00:10:54.119 job0: (groupid=0, jobs=1): err= 0: pid=1549624: Thu Jul 25 05:30:47 2024 00:10:54.119 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:10:54.119 slat (nsec): min=11671, max=33536, avg=24445.41, stdev=9500.50 00:10:54.119 clat (usec): min=40834, max=41992, avg=41206.13, stdev=426.29 00:10:54.119 lat (usec): min=40867, max=42025, avg=41230.57, stdev=430.53 00:10:54.119 clat percentiles (usec): 00:10:54.119 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 
20.00th=[41157], 00:10:54.119 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:54.119 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:54.119 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:54.119 | 99.99th=[42206] 00:10:54.119 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:54.119 slat (nsec): min=7228, max=66935, avg=15052.58, stdev=7678.79 00:10:54.119 clat (usec): min=177, max=346, avg=221.73, stdev=22.82 00:10:54.119 lat (usec): min=185, max=355, avg=236.78, stdev=25.80 00:10:54.119 clat percentiles (usec): 00:10:54.119 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:10:54.119 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:10:54.119 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 260], 00:10:54.119 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 347], 99.95th=[ 347], 00:10:54.119 | 99.99th=[ 347] 00:10:54.119 bw ( KiB/s): min= 4096, max= 4096, per=29.62%, avg=4096.00, stdev= 0.00, samples=1 00:10:54.119 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:54.119 lat (usec) : 250=86.14%, 500=9.74% 00:10:54.119 lat (msec) : 50=4.12% 00:10:54.119 cpu : usr=0.39%, sys=1.17%, ctx=534, majf=0, minf=2 00:10:54.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.119 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.119 job1: (groupid=0, jobs=1): err= 0: pid=1549634: Thu Jul 25 05:30:47 2024 00:10:54.119 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:10:54.119 slat (nsec): min=9198, max=36867, avg=25854.62, stdev=10567.45 00:10:54.119 clat (usec): min=40513, max=41013, avg=40941.94, 
stdev=105.88 00:10:54.119 lat (usec): min=40523, max=41047, avg=40967.80, stdev=108.06 00:10:54.119 clat percentiles (usec): 00:10:54.119 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:54.119 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:54.119 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:54.119 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:54.119 | 99.99th=[41157] 00:10:54.119 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:54.119 slat (nsec): min=7190, max=66700, avg=16458.37, stdev=8389.16 00:10:54.119 clat (usec): min=192, max=400, avg=258.31, stdev=35.81 00:10:54.119 lat (usec): min=200, max=437, avg=274.76, stdev=39.81 00:10:54.119 clat percentiles (usec): 00:10:54.119 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 221], 20.00th=[ 233], 00:10:54.119 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 260], 00:10:54.119 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 318], 00:10:54.119 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 400], 99.95th=[ 400], 00:10:54.119 | 99.99th=[ 400] 00:10:54.119 bw ( KiB/s): min= 4096, max= 4096, per=29.62%, avg=4096.00, stdev= 0.00, samples=1 00:10:54.119 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:54.119 lat (usec) : 250=46.15%, 500=49.91% 00:10:54.119 lat (msec) : 50=3.94% 00:10:54.119 cpu : usr=1.00%, sys=0.80%, ctx=534, majf=0, minf=1 00:10:54.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.119 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.119 job2: (groupid=0, jobs=1): err= 0: pid=1549679: Thu Jul 25 05:30:47 2024 00:10:54.119 read: 
IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:10:54.119 slat (nsec): min=7814, max=33649, avg=24974.90, stdev=10203.80 00:10:54.119 clat (usec): min=40945, max=42052, avg=41314.01, stdev=472.34 00:10:54.119 lat (usec): min=40979, max=42085, avg=41338.99, stdev=476.12 00:10:54.119 clat percentiles (usec): 00:10:54.119 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:54.119 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:54.119 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:54.119 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:54.119 | 99.99th=[42206] 00:10:54.119 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:54.119 slat (nsec): min=6961, max=44148, avg=15587.55, stdev=6606.59 00:10:54.119 clat (usec): min=193, max=390, avg=238.93, stdev=28.81 00:10:54.119 lat (usec): min=204, max=397, avg=254.51, stdev=26.87 00:10:54.119 clat percentiles (usec): 00:10:54.119 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 217], 00:10:54.119 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:10:54.119 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 293], 00:10:54.119 | 99.00th=[ 322], 99.50th=[ 367], 99.90th=[ 392], 99.95th=[ 392], 00:10:54.119 | 99.99th=[ 392] 00:10:54.119 bw ( KiB/s): min= 4096, max= 4096, per=29.62%, avg=4096.00, stdev= 0.00, samples=1 00:10:54.119 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:54.119 lat (usec) : 250=70.73%, 500=25.33% 00:10:54.119 lat (msec) : 50=3.94% 00:10:54.119 cpu : usr=0.50%, sys=0.70%, ctx=533, majf=0, minf=1 00:10:54.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.119 issued rwts: total=21,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:54.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.119 job3: (groupid=0, jobs=1): err= 0: pid=1549687: Thu Jul 25 05:30:47 2024 00:10:54.119 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:54.119 slat (nsec): min=6695, max=21098, avg=7465.96, stdev=894.19 00:10:54.119 clat (usec): min=270, max=564, avg=335.74, stdev=55.64 00:10:54.119 lat (usec): min=277, max=578, avg=343.21, stdev=55.74 00:10:54.119 clat percentiles (usec): 00:10:54.119 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 00:10:54.119 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:10:54.119 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 441], 95.00th=[ 457], 00:10:54.119 | 99.00th=[ 482], 99.50th=[ 490], 99.90th=[ 553], 99.95th=[ 562], 00:10:54.119 | 99.99th=[ 562] 00:10:54.119 write: IOPS=2025, BW=8104KiB/s (8298kB/s)(8112KiB/1001msec); 0 zone resets 00:10:54.119 slat (nsec): min=8725, max=38793, avg=10321.42, stdev=1529.33 00:10:54.119 clat (usec): min=181, max=353, avg=218.76, stdev=23.51 00:10:54.119 lat (usec): min=191, max=365, avg=229.08, stdev=23.96 00:10:54.119 clat percentiles (usec): 00:10:54.119 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:10:54.119 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:10:54.119 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 269], 00:10:54.119 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 338], 99.95th=[ 343], 00:10:54.119 | 99.99th=[ 355] 00:10:54.119 bw ( KiB/s): min= 8192, max= 8192, per=59.24%, avg=8192.00, stdev= 0.00, samples=1 00:10:54.119 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:54.119 lat (usec) : 250=49.80%, 500=50.14%, 750=0.06% 00:10:54.119 cpu : usr=1.90%, sys=4.70%, ctx=3565, majf=0, minf=1 00:10:54.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.120 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.120 issued rwts: total=1536,2028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.120 00:10:54.120 Run status group 0 (all jobs): 00:10:54.120 READ: bw=6208KiB/s (6357kB/s), 83.7KiB/s-6138KiB/s (85.7kB/s-6285kB/s), io=6400KiB (6554kB), run=1001-1031msec 00:10:54.120 WRITE: bw=13.5MiB/s (14.2MB/s), 1986KiB/s-8104KiB/s (2034kB/s-8298kB/s), io=13.9MiB (14.6MB), run=1001-1031msec 00:10:54.120 00:10:54.120 Disk stats (read/write): 00:10:54.120 nvme0n1: ios=67/512, merge=0/0, ticks=736/103, in_queue=839, util=86.47% 00:10:54.120 nvme0n2: ios=37/512, merge=0/0, ticks=719/125, in_queue=844, util=86.35% 00:10:54.120 nvme0n3: ios=17/512, merge=0/0, ticks=701/111, in_queue=812, util=88.75% 00:10:54.120 nvme0n4: ios=1450/1536, merge=0/0, ticks=1417/330, in_queue=1747, util=97.56% 00:10:54.120 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:54.120 [global] 00:10:54.120 thread=1 00:10:54.120 invalidate=1 00:10:54.120 rw=randwrite 00:10:54.120 time_based=1 00:10:54.120 runtime=1 00:10:54.120 ioengine=libaio 00:10:54.120 direct=1 00:10:54.120 bs=4096 00:10:54.120 iodepth=1 00:10:54.120 norandommap=0 00:10:54.120 numjobs=1 00:10:54.120 00:10:54.120 verify_dump=1 00:10:54.120 verify_backlog=512 00:10:54.120 verify_state_save=0 00:10:54.120 do_verify=1 00:10:54.120 verify=crc32c-intel 00:10:54.120 [job0] 00:10:54.120 filename=/dev/nvme0n1 00:10:54.120 [job1] 00:10:54.120 filename=/dev/nvme0n2 00:10:54.120 [job2] 00:10:54.120 filename=/dev/nvme0n3 00:10:54.120 [job3] 00:10:54.120 filename=/dev/nvme0n4 00:10:54.120 Could not set queue depth (nvme0n1) 00:10:54.120 Could not set queue depth (nvme0n2) 00:10:54.120 Could not set queue depth (nvme0n3) 00:10:54.120 Could not set queue depth (nvme0n4) 
00:10:54.377 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.377 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.378 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.378 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.378 fio-3.35 00:10:54.378 Starting 4 threads 00:10:55.751 00:10:55.751 job0: (groupid=0, jobs=1): err= 0: pid=1549964: Thu Jul 25 05:30:49 2024 00:10:55.751 read: IOPS=18, BW=73.1KiB/s (74.8kB/s)(76.0KiB/1040msec) 00:10:55.751 slat (nsec): min=9261, max=37873, avg=20866.89, stdev=7991.30 00:10:55.751 clat (usec): min=40903, max=42050, avg=41498.82, stdev=520.62 00:10:55.751 lat (usec): min=40918, max=42068, avg=41519.69, stdev=519.96 00:10:55.751 clat percentiles (usec): 00:10:55.751 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:55.751 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:10:55.751 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:55.752 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:55.752 | 99.99th=[42206] 00:10:55.752 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:55.752 slat (nsec): min=8299, max=55961, avg=18667.90, stdev=8243.93 00:10:55.752 clat (usec): min=202, max=1132, avg=466.67, stdev=160.44 00:10:55.752 lat (usec): min=215, max=1157, avg=485.34, stdev=159.57 00:10:55.752 clat percentiles (usec): 00:10:55.752 | 1.00th=[ 221], 5.00th=[ 249], 10.00th=[ 277], 20.00th=[ 314], 00:10:55.752 | 30.00th=[ 371], 40.00th=[ 404], 50.00th=[ 445], 60.00th=[ 490], 00:10:55.752 | 70.00th=[ 545], 80.00th=[ 603], 90.00th=[ 685], 95.00th=[ 766], 00:10:55.752 | 99.00th=[ 914], 99.50th=[ 979], 99.90th=[ 1139], 99.95th=[ 1139], 00:10:55.752 | 
99.99th=[ 1139] 00:10:55.752 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.752 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.752 lat (usec) : 250=4.90%, 500=55.74%, 750=30.32%, 1000=5.27% 00:10:55.752 lat (msec) : 2=0.19%, 50=3.58% 00:10:55.752 cpu : usr=0.58%, sys=1.15%, ctx=533, majf=0, minf=2 00:10:55.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.752 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.752 job1: (groupid=0, jobs=1): err= 0: pid=1549971: Thu Jul 25 05:30:49 2024 00:10:55.752 read: IOPS=1265, BW=5063KiB/s (5184kB/s)(5068KiB/1001msec) 00:10:55.752 slat (nsec): min=4744, max=62339, avg=15929.68, stdev=8800.87 00:10:55.752 clat (usec): min=238, max=41997, avg=494.94, stdev=2824.43 00:10:55.752 lat (usec): min=245, max=42012, avg=510.87, stdev=2824.67 00:10:55.752 clat percentiles (usec): 00:10:55.752 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 262], 00:10:55.752 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 297], 00:10:55.752 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 383], 95.00th=[ 408], 00:10:55.752 | 99.00th=[ 482], 99.50th=[ 562], 99.90th=[42206], 99.95th=[42206], 00:10:55.752 | 99.99th=[42206] 00:10:55.752 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:55.752 slat (nsec): min=6207, max=58575, avg=12270.23, stdev=5562.93 00:10:55.752 clat (usec): min=155, max=554, avg=209.49, stdev=58.07 00:10:55.752 lat (usec): min=168, max=563, avg=221.76, stdev=57.40 00:10:55.752 clat percentiles (usec): 00:10:55.752 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:10:55.752 | 30.00th=[ 178], 40.00th=[ 182], 
50.00th=[ 186], 60.00th=[ 194], 00:10:55.752 | 70.00th=[ 210], 80.00th=[ 231], 90.00th=[ 281], 95.00th=[ 338], 00:10:55.752 | 99.00th=[ 453], 99.50th=[ 482], 99.90th=[ 529], 99.95th=[ 553], 00:10:55.752 | 99.99th=[ 553] 00:10:55.752 bw ( KiB/s): min= 8192, max= 8192, per=69.33%, avg=8192.00, stdev= 0.00, samples=1 00:10:55.752 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:55.752 lat (usec) : 250=47.98%, 500=51.55%, 750=0.25% 00:10:55.752 lat (msec) : 50=0.21% 00:10:55.752 cpu : usr=2.50%, sys=3.80%, ctx=2803, majf=0, minf=1 00:10:55.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.752 issued rwts: total=1267,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.752 job2: (groupid=0, jobs=1): err= 0: pid=1549973: Thu Jul 25 05:30:49 2024 00:10:55.752 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:10:55.752 slat (nsec): min=8504, max=43258, avg=21078.59, stdev=8877.26 00:10:55.752 clat (usec): min=442, max=42021, avg=39599.68, stdev=8762.51 00:10:55.752 lat (usec): min=458, max=42039, avg=39620.76, stdev=8763.55 00:10:55.752 clat percentiles (usec): 00:10:55.752 | 1.00th=[ 445], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:55.752 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:10:55.752 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:55.752 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:55.752 | 99.99th=[42206] 00:10:55.752 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:55.752 slat (nsec): min=6162, max=41766, avg=11156.70, stdev=5679.79 00:10:55.752 clat (usec): min=177, max=571, avg=270.92, stdev=74.18 00:10:55.752 lat (usec): 
min=184, max=578, avg=282.08, stdev=75.52 00:10:55.752 clat percentiles (usec): 00:10:55.752 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 208], 00:10:55.752 | 30.00th=[ 219], 40.00th=[ 231], 50.00th=[ 243], 60.00th=[ 269], 00:10:55.752 | 70.00th=[ 302], 80.00th=[ 338], 90.00th=[ 388], 95.00th=[ 412], 00:10:55.752 | 99.00th=[ 461], 99.50th=[ 494], 99.90th=[ 570], 99.95th=[ 570], 00:10:55.752 | 99.99th=[ 570] 00:10:55.752 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.752 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.752 lat (usec) : 250=50.75%, 500=44.94%, 750=0.37% 00:10:55.752 lat (msec) : 50=3.93% 00:10:55.752 cpu : usr=0.20%, sys=0.69%, ctx=535, majf=0, minf=1 00:10:55.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.752 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.752 job3: (groupid=0, jobs=1): err= 0: pid=1549974: Thu Jul 25 05:30:49 2024 00:10:55.752 read: IOPS=23, BW=92.8KiB/s (95.0kB/s)(96.0KiB/1035msec) 00:10:55.752 slat (nsec): min=8382, max=34715, avg=17007.42, stdev=6455.66 00:10:55.752 clat (usec): min=361, max=42077, avg=32851.23, stdev=17009.47 00:10:55.752 lat (usec): min=369, max=42095, avg=32868.24, stdev=17013.67 00:10:55.752 clat percentiles (usec): 00:10:55.752 | 1.00th=[ 363], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[ 465], 00:10:55.752 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:55.752 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:55.752 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:55.752 | 99.99th=[42206] 00:10:55.752 write: IOPS=494, BW=1979KiB/s 
(2026kB/s)(2048KiB/1035msec); 0 zone resets 00:10:55.752 slat (nsec): min=6647, max=49471, avg=17185.04, stdev=6483.84 00:10:55.752 clat (usec): min=248, max=783, avg=458.68, stdev=127.78 00:10:55.752 lat (usec): min=264, max=800, avg=475.86, stdev=127.30 00:10:55.752 clat percentiles (usec): 00:10:55.752 | 1.00th=[ 255], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 343], 00:10:55.752 | 30.00th=[ 379], 40.00th=[ 396], 50.00th=[ 424], 60.00th=[ 486], 00:10:55.752 | 70.00th=[ 537], 80.00th=[ 586], 90.00th=[ 652], 95.00th=[ 676], 00:10:55.752 | 99.00th=[ 725], 99.50th=[ 775], 99.90th=[ 783], 99.95th=[ 783], 00:10:55.752 | 99.99th=[ 783] 00:10:55.752 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.752 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.752 lat (usec) : 250=0.37%, 500=59.89%, 750=35.63%, 1000=0.56% 00:10:55.752 lat (msec) : 50=3.54% 00:10:55.752 cpu : usr=0.10%, sys=1.26%, ctx=538, majf=0, minf=1 00:10:55.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.752 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.752 00:10:55.752 Run status group 0 (all jobs): 00:10:55.752 READ: bw=5123KiB/s (5246kB/s), 73.1KiB/s-5063KiB/s (74.8kB/s-5184kB/s), io=5328KiB (5456kB), run=1001-1040msec 00:10:55.752 WRITE: bw=11.5MiB/s (12.1MB/s), 1969KiB/s-6138KiB/s (2016kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1040msec 00:10:55.752 00:10:55.752 Disk stats (read/write): 00:10:55.752 nvme0n1: ios=56/512, merge=0/0, ticks=1415/229, in_queue=1644, util=97.49% 00:10:55.752 nvme0n2: ios=1220/1536, merge=0/0, ticks=425/295, in_queue=720, util=86.33% 00:10:55.752 nvme0n3: ios=39/512, merge=0/0, ticks=1574/132, in_queue=1706, 
util=97.69% 00:10:55.752 nvme0n4: ios=76/512, merge=0/0, ticks=1524/232, in_queue=1756, util=97.67% 00:10:55.752 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:55.752 [global] 00:10:55.752 thread=1 00:10:55.752 invalidate=1 00:10:55.752 rw=write 00:10:55.752 time_based=1 00:10:55.752 runtime=1 00:10:55.752 ioengine=libaio 00:10:55.752 direct=1 00:10:55.752 bs=4096 00:10:55.752 iodepth=128 00:10:55.752 norandommap=0 00:10:55.752 numjobs=1 00:10:55.752 00:10:55.752 verify_dump=1 00:10:55.752 verify_backlog=512 00:10:55.752 verify_state_save=0 00:10:55.752 do_verify=1 00:10:55.752 verify=crc32c-intel 00:10:55.752 [job0] 00:10:55.752 filename=/dev/nvme0n1 00:10:55.752 [job1] 00:10:55.752 filename=/dev/nvme0n2 00:10:55.752 [job2] 00:10:55.752 filename=/dev/nvme0n3 00:10:55.752 [job3] 00:10:55.752 filename=/dev/nvme0n4 00:10:55.752 Could not set queue depth (nvme0n1) 00:10:55.752 Could not set queue depth (nvme0n2) 00:10:55.752 Could not set queue depth (nvme0n3) 00:10:55.752 Could not set queue depth (nvme0n4) 00:10:56.010 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.010 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.010 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.010 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.010 fio-3.35 00:10:56.010 Starting 4 threads 00:10:57.384 00:10:57.384 job0: (groupid=0, jobs=1): err= 0: pid=1550204: Thu Jul 25 05:30:50 2024 00:10:57.384 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:10:57.384 slat (usec): min=3, max=16604, avg=113.18, stdev=764.39 00:10:57.384 clat (usec): min=3737, max=59473, avg=14318.36, 
stdev=5662.91 00:10:57.384 lat (usec): min=3750, max=65170, avg=14431.54, stdev=5717.23 00:10:57.384 clat percentiles (usec): 00:10:57.384 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10552], 00:10:57.384 | 30.00th=[11600], 40.00th=[12649], 50.00th=[13304], 60.00th=[14091], 00:10:57.384 | 70.00th=[15008], 80.00th=[16188], 90.00th=[19268], 95.00th=[23987], 00:10:57.384 | 99.00th=[41157], 99.50th=[42206], 99.90th=[59507], 99.95th=[59507], 00:10:57.384 | 99.99th=[59507] 00:10:57.384 write: IOPS=4149, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1010msec); 0 zone resets 00:10:57.385 slat (usec): min=4, max=11307, avg=116.49, stdev=614.23 00:10:57.385 clat (usec): min=1357, max=68535, avg=16573.81, stdev=13607.18 00:10:57.385 lat (usec): min=1379, max=68552, avg=16690.30, stdev=13686.68 00:10:57.385 clat percentiles (usec): 00:10:57.385 | 1.00th=[ 3851], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 9765], 00:10:57.385 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:10:57.385 | 70.00th=[14746], 80.00th=[23200], 90.00th=[28181], 95.00th=[54789], 00:10:57.385 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:10:57.385 | 99.99th=[68682] 00:10:57.385 bw ( KiB/s): min= 9632, max=23136, per=24.69%, avg=16384.00, stdev=9548.77, samples=2 00:10:57.385 iops : min= 2408, max= 5784, avg=4096.00, stdev=2387.19, samples=2 00:10:57.385 lat (msec) : 2=0.11%, 4=0.56%, 10=16.63%, 20=64.96%, 50=14.91% 00:10:57.385 lat (msec) : 100=2.84% 00:10:57.385 cpu : usr=5.15%, sys=9.22%, ctx=498, majf=0, minf=1 00:10:57.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:57.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.385 issued rwts: total=4096,4191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.385 job1: (groupid=0, jobs=1): 
err= 0: pid=1550205: Thu Jul 25 05:30:50 2024 00:10:57.385 read: IOPS=4960, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1005msec) 00:10:57.385 slat (usec): min=2, max=15597, avg=102.17, stdev=689.25 00:10:57.385 clat (usec): min=4567, max=55625, avg=13447.71, stdev=7772.57 00:10:57.385 lat (usec): min=4573, max=55634, avg=13549.89, stdev=7836.65 00:10:57.385 clat percentiles (usec): 00:10:57.385 | 1.00th=[ 5473], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10421], 00:10:57.385 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11469], 00:10:57.385 | 70.00th=[11994], 80.00th=[13042], 90.00th=[17695], 95.00th=[31327], 00:10:57.385 | 99.00th=[47449], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:10:57.385 | 99.99th=[55837] 00:10:57.385 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:10:57.385 slat (usec): min=3, max=14688, avg=85.29, stdev=581.68 00:10:57.385 clat (usec): min=1483, max=28667, avg=11736.57, stdev=3163.63 00:10:57.385 lat (usec): min=1496, max=31892, avg=11821.86, stdev=3193.29 00:10:57.385 clat percentiles (usec): 00:10:57.385 | 1.00th=[ 5342], 5.00th=[ 7439], 10.00th=[ 9634], 20.00th=[10290], 00:10:57.385 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:10:57.385 | 70.00th=[11863], 80.00th=[13304], 90.00th=[14746], 95.00th=[17171], 00:10:57.385 | 99.00th=[24773], 99.50th=[28181], 99.90th=[28443], 99.95th=[28443], 00:10:57.385 | 99.99th=[28705] 00:10:57.385 bw ( KiB/s): min=16504, max=24456, per=30.86%, avg=20480.00, stdev=5622.91, samples=2 00:10:57.385 iops : min= 4126, max= 6114, avg=5120.00, stdev=1405.73, samples=2 00:10:57.385 lat (msec) : 2=0.06%, 4=0.16%, 10=13.09%, 20=80.69%, 50=5.53% 00:10:57.385 lat (msec) : 100=0.47% 00:10:57.385 cpu : usr=5.88%, sys=9.06%, ctx=410, majf=0, minf=1 00:10:57.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:57.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.385 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.385 issued rwts: total=4985,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.385 job2: (groupid=0, jobs=1): err= 0: pid=1550206: Thu Jul 25 05:30:50 2024 00:10:57.385 read: IOPS=3756, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1007msec) 00:10:57.385 slat (usec): min=3, max=28752, avg=128.93, stdev=1184.23 00:10:57.385 clat (usec): min=4861, max=83429, avg=15569.82, stdev=7660.27 00:10:57.385 lat (usec): min=4879, max=83434, avg=15698.74, stdev=7785.10 00:10:57.385 clat percentiles (usec): 00:10:57.385 | 1.00th=[ 6128], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[11469], 00:10:57.385 | 30.00th=[11731], 40.00th=[12125], 50.00th=[13042], 60.00th=[14222], 00:10:57.385 | 70.00th=[17433], 80.00th=[21103], 90.00th=[23200], 95.00th=[23725], 00:10:57.385 | 99.00th=[58459], 99.50th=[58459], 99.90th=[83362], 99.95th=[83362], 00:10:57.385 | 99.99th=[83362] 00:10:57.385 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:10:57.385 slat (usec): min=3, max=32734, avg=113.61, stdev=839.93 00:10:57.385 clat (usec): min=3339, max=87507, avg=16320.42, stdev=13292.51 00:10:57.385 lat (usec): min=3353, max=88227, avg=16434.03, stdev=13340.06 00:10:57.385 clat percentiles (usec): 00:10:57.385 | 1.00th=[ 4621], 5.00th=[ 7046], 10.00th=[ 8029], 20.00th=[ 9765], 00:10:57.385 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:10:57.385 | 70.00th=[14484], 80.00th=[16450], 90.00th=[31065], 95.00th=[53216], 00:10:57.385 | 99.00th=[84411], 99.50th=[85459], 99.90th=[87557], 99.95th=[87557], 00:10:57.385 | 99.99th=[87557] 00:10:57.385 bw ( KiB/s): min=11736, max=21032, per=24.69%, avg=16384.00, stdev=6573.26, samples=2 00:10:57.385 iops : min= 2934, max= 5258, avg=4096.00, stdev=1643.32, samples=2 00:10:57.385 lat (msec) : 4=0.18%, 10=14.72%, 20=66.51%, 50=15.03%, 100=3.57% 00:10:57.385 cpu : usr=5.86%, sys=7.36%, ctx=410, 
majf=0, minf=1 00:10:57.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:57.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.385 issued rwts: total=3783,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.385 job3: (groupid=0, jobs=1): err= 0: pid=1550207: Thu Jul 25 05:30:50 2024 00:10:57.385 read: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(14.1MiB/1055msec) 00:10:57.385 slat (usec): min=2, max=14969, avg=117.33, stdev=800.84 00:10:57.385 clat (usec): min=1203, max=56565, avg=14813.79, stdev=5791.04 00:10:57.385 lat (usec): min=1218, max=56582, avg=14931.12, stdev=5827.57 00:10:57.385 clat percentiles (usec): 00:10:57.385 | 1.00th=[ 2474], 5.00th=[ 6390], 10.00th=[ 8979], 20.00th=[11863], 00:10:57.385 | 30.00th=[12780], 40.00th=[13566], 50.00th=[13960], 60.00th=[14877], 00:10:57.385 | 70.00th=[15926], 80.00th=[17433], 90.00th=[21365], 95.00th=[24249], 00:10:57.385 | 99.00th=[33424], 99.50th=[34866], 99.90th=[56361], 99.95th=[56361], 00:10:57.385 | 99.99th=[56361] 00:10:57.385 write: IOPS=3882, BW=15.2MiB/s (15.9MB/s)(16.0MiB/1055msec); 0 zone resets 00:10:57.385 slat (usec): min=3, max=13853, avg=129.15, stdev=649.59 00:10:57.385 clat (usec): min=1977, max=97056, avg=19665.02, stdev=13468.28 00:10:57.385 lat (usec): min=3609, max=97067, avg=19794.17, stdev=13537.92 00:10:57.385 clat percentiles (usec): 00:10:57.385 | 1.00th=[ 4015], 5.00th=[ 7242], 10.00th=[ 9110], 20.00th=[11731], 00:10:57.385 | 30.00th=[13042], 40.00th=[13566], 50.00th=[15533], 60.00th=[16581], 00:10:57.385 | 70.00th=[22676], 80.00th=[24511], 90.00th=[35914], 95.00th=[43254], 00:10:57.385 | 99.00th=[82314], 99.50th=[92799], 99.90th=[96994], 99.95th=[96994], 00:10:57.385 | 99.99th=[96994] 00:10:57.385 bw ( KiB/s): min=11384, max=20480, per=24.01%, avg=15932.00, stdev=6431.84, 
samples=2 00:10:57.385 iops : min= 2846, max= 5120, avg=3983.00, stdev=1607.96, samples=2 00:10:57.385 lat (msec) : 2=0.13%, 4=1.23%, 10=12.69%, 20=61.68%, 50=22.62% 00:10:57.385 lat (msec) : 100=1.65% 00:10:57.385 cpu : usr=3.98%, sys=6.45%, ctx=447, majf=0, minf=1 00:10:57.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:57.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.385 issued rwts: total=3598,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.385 00:10:57.385 Run status group 0 (all jobs): 00:10:57.385 READ: bw=61.0MiB/s (63.9MB/s), 13.3MiB/s-19.4MiB/s (14.0MB/s-20.3MB/s), io=64.3MiB (67.4MB), run=1005-1055msec 00:10:57.385 WRITE: bw=64.8MiB/s (68.0MB/s), 15.2MiB/s-19.9MiB/s (15.9MB/s-20.9MB/s), io=68.4MiB (71.7MB), run=1005-1055msec 00:10:57.385 00:10:57.385 Disk stats (read/write): 00:10:57.385 nvme0n1: ios=3621/3823, merge=0/0, ticks=38543/39290, in_queue=77833, util=97.49% 00:10:57.385 nvme0n2: ios=4135/4160, merge=0/0, ticks=30504/25371, in_queue=55875, util=97.66% 00:10:57.385 nvme0n3: ios=3088/3117, merge=0/0, ticks=47275/43774, in_queue=91049, util=98.54% 00:10:57.385 nvme0n4: ios=3072/3584, merge=0/0, ticks=32717/54862, in_queue=87579, util=89.68% 00:10:57.385 05:30:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:57.385 [global] 00:10:57.385 thread=1 00:10:57.385 invalidate=1 00:10:57.385 rw=randwrite 00:10:57.385 time_based=1 00:10:57.385 runtime=1 00:10:57.385 ioengine=libaio 00:10:57.385 direct=1 00:10:57.385 bs=4096 00:10:57.385 iodepth=128 00:10:57.385 norandommap=0 00:10:57.385 numjobs=1 00:10:57.385 00:10:57.385 verify_dump=1 00:10:57.385 verify_backlog=512 00:10:57.385 
verify_state_save=0 00:10:57.385 do_verify=1 00:10:57.385 verify=crc32c-intel 00:10:57.385 [job0] 00:10:57.385 filename=/dev/nvme0n1 00:10:57.385 [job1] 00:10:57.385 filename=/dev/nvme0n2 00:10:57.385 [job2] 00:10:57.385 filename=/dev/nvme0n3 00:10:57.385 [job3] 00:10:57.385 filename=/dev/nvme0n4 00:10:57.385 Could not set queue depth (nvme0n1) 00:10:57.385 Could not set queue depth (nvme0n2) 00:10:57.385 Could not set queue depth (nvme0n3) 00:10:57.385 Could not set queue depth (nvme0n4) 00:10:57.386 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.386 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.386 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.386 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.386 fio-3.35 00:10:57.386 Starting 4 threads 00:10:58.762 00:10:58.762 job0: (groupid=0, jobs=1): err= 0: pid=1550432: Thu Jul 25 05:30:52 2024 00:10:58.762 read: IOPS=4558, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1007msec) 00:10:58.762 slat (usec): min=2, max=18314, avg=110.34, stdev=872.50 00:10:58.762 clat (usec): min=3815, max=40639, avg=14355.11, stdev=5538.69 00:10:58.762 lat (usec): min=3819, max=45041, avg=14465.45, stdev=5618.59 00:10:58.762 clat percentiles (usec): 00:10:58.762 | 1.00th=[ 5866], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10159], 00:10:58.762 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11863], 60.00th=[13829], 00:10:58.762 | 70.00th=[15664], 80.00th=[19792], 90.00th=[23200], 95.00th=[25035], 00:10:58.762 | 99.00th=[28181], 99.50th=[29492], 99.90th=[34341], 99.95th=[34866], 00:10:58.762 | 99.99th=[40633] 00:10:58.762 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:10:58.762 slat (usec): min=3, max=18324, avg=96.92, stdev=753.96 00:10:58.762 
clat (usec): min=1482, max=61190, avg=13415.85, stdev=9340.94 00:10:58.762 lat (usec): min=1529, max=61200, avg=13512.77, stdev=9392.30 00:10:58.762 clat percentiles (usec): 00:10:58.762 | 1.00th=[ 3785], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7767], 00:10:58.762 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:10:58.762 | 70.00th=[13304], 80.00th=[16057], 90.00th=[19792], 95.00th=[35390], 00:10:58.762 | 99.00th=[59507], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:10:58.762 | 99.99th=[61080] 00:10:58.762 bw ( KiB/s): min=16624, max=20240, per=26.94%, avg=18432.00, stdev=2556.90, samples=2 00:10:58.762 iops : min= 4156, max= 5060, avg=4608.00, stdev=639.22, samples=2 00:10:58.762 lat (msec) : 2=0.10%, 4=0.58%, 10=24.82%, 20=60.83%, 50=12.73% 00:10:58.762 lat (msec) : 100=0.95% 00:10:58.762 cpu : usr=5.96%, sys=7.16%, ctx=357, majf=0, minf=13 00:10:58.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:58.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.762 issued rwts: total=4590,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.762 job1: (groupid=0, jobs=1): err= 0: pid=1550433: Thu Jul 25 05:30:52 2024 00:10:58.762 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:10:58.762 slat (usec): min=2, max=15750, avg=113.06, stdev=899.52 00:10:58.762 clat (usec): min=3907, max=46345, avg=15030.82, stdev=6498.00 00:10:58.762 lat (usec): min=3914, max=46351, avg=15143.88, stdev=6568.48 00:10:58.762 clat percentiles (usec): 00:10:58.762 | 1.00th=[ 5866], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10290], 00:10:58.762 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12649], 60.00th=[14091], 00:10:58.762 | 70.00th=[16188], 80.00th=[19530], 90.00th=[23200], 95.00th=[27657], 00:10:58.762 | 99.00th=[44827], 
99.50th=[44827], 99.90th=[44827], 99.95th=[46400], 00:10:58.762 | 99.99th=[46400] 00:10:58.762 write: IOPS=4677, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1011msec); 0 zone resets 00:10:58.762 slat (usec): min=3, max=16893, avg=91.02, stdev=723.64 00:10:58.762 clat (usec): min=985, max=34271, avg=12481.83, stdev=4645.40 00:10:58.762 lat (usec): min=991, max=34283, avg=12572.85, stdev=4687.89 00:10:58.762 clat percentiles (usec): 00:10:58.762 | 1.00th=[ 3064], 5.00th=[ 5669], 10.00th=[ 7111], 20.00th=[ 9372], 00:10:58.762 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11600], 60.00th=[12125], 00:10:58.762 | 70.00th=[13304], 80.00th=[15139], 90.00th=[19792], 95.00th=[21627], 00:10:58.762 | 99.00th=[25822], 99.50th=[26870], 99.90th=[28181], 99.95th=[28181], 00:10:58.762 | 99.99th=[34341] 00:10:58.762 bw ( KiB/s): min=15936, max=20976, per=26.97%, avg=18456.00, stdev=3563.82, samples=2 00:10:58.762 iops : min= 3984, max= 5244, avg=4614.00, stdev=890.95, samples=2 00:10:58.763 lat (usec) : 1000=0.10% 00:10:58.763 lat (msec) : 2=0.07%, 4=0.87%, 10=16.60%, 20=68.11%, 50=14.26% 00:10:58.763 cpu : usr=4.75%, sys=8.32%, ctx=374, majf=0, minf=11 00:10:58.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:58.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.763 issued rwts: total=4608,4729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.763 job2: (groupid=0, jobs=1): err= 0: pid=1550434: Thu Jul 25 05:30:52 2024 00:10:58.763 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:10:58.763 slat (usec): min=2, max=27796, avg=138.04, stdev=1009.99 00:10:58.763 clat (usec): min=7448, max=75270, avg=18241.76, stdev=11719.50 00:10:58.763 lat (usec): min=7456, max=75275, avg=18379.80, stdev=11804.33 00:10:58.763 clat percentiles (usec): 00:10:58.763 | 1.00th=[ 8848], 
5.00th=[10421], 10.00th=[11076], 20.00th=[12256], 00:10:58.763 | 30.00th=[12780], 40.00th=[13566], 50.00th=[14353], 60.00th=[15795], 00:10:58.763 | 70.00th=[16712], 80.00th=[17957], 90.00th=[28443], 95.00th=[51119], 00:10:58.763 | 99.00th=[73925], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:10:58.763 | 99.99th=[74974] 00:10:58.763 write: IOPS=3544, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1005msec); 0 zone resets 00:10:58.763 slat (usec): min=3, max=19259, avg=154.67, stdev=1067.55 00:10:58.763 clat (usec): min=247, max=91947, avg=19961.17, stdev=14718.76 00:10:58.763 lat (usec): min=5511, max=91953, avg=20115.84, stdev=14814.39 00:10:58.763 clat percentiles (usec): 00:10:58.763 | 1.00th=[ 6194], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[12125], 00:10:58.763 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13304], 60.00th=[13960], 00:10:58.763 | 70.00th=[17433], 80.00th=[28443], 90.00th=[33817], 95.00th=[42730], 00:10:58.763 | 99.00th=[88605], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:10:58.763 | 99.99th=[91751] 00:10:58.763 bw ( KiB/s): min=11640, max=15832, per=20.07%, avg=13736.00, stdev=2964.19, samples=2 00:10:58.763 iops : min= 2910, max= 3958, avg=3434.00, stdev=741.05, samples=2 00:10:58.763 lat (usec) : 250=0.02% 00:10:58.763 lat (msec) : 10=4.36%, 20=71.39%, 50=19.58%, 100=4.66% 00:10:58.763 cpu : usr=3.19%, sys=6.37%, ctx=282, majf=0, minf=13 00:10:58.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:58.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.763 issued rwts: total=3072,3562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.763 job3: (groupid=0, jobs=1): err= 0: pid=1550435: Thu Jul 25 05:30:52 2024 00:10:58.763 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:10:58.763 slat (usec): min=2, max=21512, 
avg=123.33, stdev=907.60 00:10:58.763 clat (usec): min=3329, max=62207, avg=15636.42, stdev=6811.23 00:10:58.763 lat (usec): min=3334, max=62220, avg=15759.75, stdev=6881.36 00:10:58.763 clat percentiles (usec): 00:10:58.763 | 1.00th=[ 5800], 5.00th=[ 8586], 10.00th=[10290], 20.00th=[11731], 00:10:58.763 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13698], 60.00th=[15008], 00:10:58.763 | 70.00th=[16450], 80.00th=[17695], 90.00th=[22414], 95.00th=[27657], 00:10:58.763 | 99.00th=[45876], 99.50th=[54789], 99.90th=[62129], 99.95th=[62129], 00:10:58.763 | 99.99th=[62129] 00:10:58.763 write: IOPS=4370, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1006msec); 0 zone resets 00:10:58.763 slat (usec): min=3, max=12290, avg=101.68, stdev=659.25 00:10:58.763 clat (usec): min=462, max=62179, avg=14404.92, stdev=7502.24 00:10:58.763 lat (usec): min=478, max=62184, avg=14506.60, stdev=7554.55 00:10:58.763 clat percentiles (usec): 00:10:58.763 | 1.00th=[ 529], 5.00th=[ 6063], 10.00th=[ 8586], 20.00th=[11207], 00:10:58.763 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304], 00:10:58.763 | 70.00th=[13698], 80.00th=[14877], 90.00th=[24773], 95.00th=[26608], 00:10:58.763 | 99.00th=[46400], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:10:58.763 | 99.99th=[62129] 00:10:58.763 bw ( KiB/s): min=15352, max=18800, per=24.95%, avg=17076.00, stdev=2438.10, samples=2 00:10:58.763 iops : min= 3838, max= 4700, avg=4269.00, stdev=609.53, samples=2 00:10:58.763 lat (usec) : 500=0.26%, 750=0.53% 00:10:58.763 lat (msec) : 2=0.02%, 4=0.78%, 10=8.60%, 20=76.33%, 50=13.12% 00:10:58.763 lat (msec) : 100=0.37% 00:10:58.763 cpu : usr=4.08%, sys=6.37%, ctx=330, majf=0, minf=13 00:10:58.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:58.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.763 issued rwts: total=4096,4397,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:58.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.763 00:10:58.763 Run status group 0 (all jobs): 00:10:58.763 READ: bw=63.2MiB/s (66.3MB/s), 11.9MiB/s-17.8MiB/s (12.5MB/s-18.7MB/s), io=63.9MiB (67.0MB), run=1005-1011msec 00:10:58.763 WRITE: bw=66.8MiB/s (70.1MB/s), 13.8MiB/s-18.3MiB/s (14.5MB/s-19.2MB/s), io=67.6MiB (70.8MB), run=1005-1011msec 00:10:58.763 00:10:58.763 Disk stats (read/write): 00:10:58.763 nvme0n1: ios=3997/4096, merge=0/0, ticks=46986/50273, in_queue=97259, util=91.78% 00:10:58.763 nvme0n2: ios=4145/4403, merge=0/0, ticks=53112/50746, in_queue=103858, util=88.43% 00:10:58.763 nvme0n3: ios=2552/2560, merge=0/0, ticks=21609/27012, in_queue=48621, util=91.35% 00:10:58.763 nvme0n4: ios=3389/3584, merge=0/0, ticks=43736/39471, in_queue=83207, util=99.90% 00:10:58.763 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:58.763 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1550576 00:10:58.763 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:58.763 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:58.763 [global] 00:10:58.763 thread=1 00:10:58.763 invalidate=1 00:10:58.763 rw=read 00:10:58.763 time_based=1 00:10:58.763 runtime=10 00:10:58.763 ioengine=libaio 00:10:58.763 direct=1 00:10:58.763 bs=4096 00:10:58.763 iodepth=1 00:10:58.763 norandommap=1 00:10:58.763 numjobs=1 00:10:58.763 00:10:58.763 [job0] 00:10:58.763 filename=/dev/nvme0n1 00:10:58.763 [job1] 00:10:58.763 filename=/dev/nvme0n2 00:10:58.763 [job2] 00:10:58.763 filename=/dev/nvme0n3 00:10:58.763 [job3] 00:10:58.763 filename=/dev/nvme0n4 00:10:58.763 Could not set queue depth (nvme0n1) 00:10:58.763 Could not set queue depth (nvme0n2) 00:10:58.763 Could not set queue depth (nvme0n3) 00:10:58.763 
Could not set queue depth (nvme0n4) 00:10:58.763 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.763 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.763 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.763 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.763 fio-3.35 00:10:58.763 Starting 4 threads 00:11:02.038 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:02.038 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:02.038 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=331776, buflen=4096 00:11:02.038 fio: pid=1550788, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:02.296 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.296 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:02.296 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=319488, buflen=4096 00:11:02.296 fio: pid=1550787, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:02.554 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=36597760, buflen=4096 00:11:02.554 fio: pid=1550785, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:02.554 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:11:02.554 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:02.819 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=41910272, buflen=4096 00:11:02.819 fio: pid=1550786, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:02.819 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.819 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:02.819 00:11:02.819 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1550785: Thu Jul 25 05:30:56 2024 00:11:02.819 read: IOPS=2580, BW=10.1MiB/s (10.6MB/s)(34.9MiB/3463msec) 00:11:02.819 slat (usec): min=4, max=15592, avg=21.91, stdev=282.31 00:11:02.819 clat (usec): min=256, max=1621, avg=360.41, stdev=63.38 00:11:02.819 lat (usec): min=265, max=16064, avg=382.32, stdev=291.79 00:11:02.819 clat percentiles (usec): 00:11:02.819 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 310], 00:11:02.819 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 367], 00:11:02.819 | 70.00th=[ 388], 80.00th=[ 408], 90.00th=[ 449], 95.00th=[ 482], 00:11:02.819 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 586], 99.95th=[ 799], 00:11:02.819 | 99.99th=[ 1614] 00:11:02.819 bw ( KiB/s): min= 9424, max=12440, per=51.46%, avg=10646.67, stdev=1333.85, samples=6 00:11:02.819 iops : min= 2356, max= 3110, avg=2661.67, stdev=333.46, samples=6 00:11:02.819 lat (usec) : 500=97.05%, 750=2.86%, 1000=0.07% 00:11:02.819 lat (msec) : 2=0.01% 00:11:02.819 cpu : usr=2.14%, sys=5.26%, ctx=8941, majf=0, minf=1 00:11:02.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:11:02.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.819 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.819 issued rwts: total=8936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.819 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1550786: Thu Jul 25 05:30:56 2024 00:11:02.819 read: IOPS=2738, BW=10.7MiB/s (11.2MB/s)(40.0MiB/3737msec) 00:11:02.819 slat (usec): min=4, max=16700, avg=19.29, stdev=293.85 00:11:02.819 clat (usec): min=236, max=43193, avg=340.34, stdev=735.61 00:11:02.819 lat (usec): min=241, max=53085, avg=359.63, stdev=843.10 00:11:02.819 clat percentiles (usec): 00:11:02.819 | 1.00th=[ 249], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:11:02.819 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:11:02.819 | 70.00th=[ 334], 80.00th=[ 359], 90.00th=[ 383], 95.00th=[ 424], 00:11:02.819 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 955], 99.95th=[ 5276], 00:11:02.819 | 99.99th=[41157] 00:11:02.819 bw ( KiB/s): min= 8193, max=12672, per=53.34%, avg=11035.57, stdev=1442.13, samples=7 00:11:02.819 iops : min= 2048, max= 3168, avg=2758.86, stdev=360.62, samples=7 00:11:02.819 lat (usec) : 250=1.20%, 500=97.54%, 750=1.07%, 1000=0.12% 00:11:02.819 lat (msec) : 2=0.01%, 10=0.01%, 20=0.02%, 50=0.03% 00:11:02.819 cpu : usr=2.06%, sys=5.51%, ctx=10239, majf=0, minf=1 00:11:02.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.819 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.819 issued rwts: total=10233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.819 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=1550787: Thu Jul 25 05:30:56 2024 00:11:02.819 read: IOPS=24, BW=97.3KiB/s (99.7kB/s)(312KiB/3205msec) 00:11:02.819 slat (nsec): min=9294, max=29285, avg=13722.23, stdev=2701.21 00:11:02.819 clat (usec): min=533, max=42082, avg=40783.92, stdev=4640.75 00:11:02.819 lat (usec): min=562, max=42094, avg=40797.63, stdev=4638.97 00:11:02.819 clat percentiles (usec): 00:11:02.819 | 1.00th=[ 537], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:02.819 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:02.819 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:02.819 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:02.819 | 99.99th=[42206] 00:11:02.819 bw ( KiB/s): min= 96, max= 104, per=0.47%, avg=97.33, stdev= 3.27, samples=6 00:11:02.819 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:11:02.819 lat (usec) : 750=1.27% 00:11:02.819 lat (msec) : 50=97.47% 00:11:02.819 cpu : usr=0.06%, sys=0.00%, ctx=81, majf=0, minf=1 00:11:02.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.820 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.820 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.820 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1550788: Thu Jul 25 05:30:56 2024 00:11:02.820 read: IOPS=27, BW=111KiB/s (113kB/s)(324KiB/2932msec) 00:11:02.820 slat (nsec): min=15024, max=60973, avg=25953.15, stdev=10637.61 00:11:02.820 clat (usec): min=464, max=42350, avg=35889.06, stdev=14090.94 00:11:02.820 lat (usec): min=501, max=42366, avg=35914.88, stdev=14087.39 00:11:02.820 clat percentiles (usec): 00:11:02.820 | 1.00th=[ 465], 5.00th=[ 570], 10.00th=[ 635], 
20.00th=[41157], 00:11:02.820 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:02.820 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:02.820 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:02.820 | 99.99th=[42206] 00:11:02.820 bw ( KiB/s): min= 96, max= 152, per=0.55%, avg=113.60, stdev=22.20, samples=5 00:11:02.820 iops : min= 24, max= 38, avg=28.40, stdev= 5.55, samples=5 00:11:02.820 lat (usec) : 500=2.44%, 750=10.98% 00:11:02.820 lat (msec) : 50=85.37% 00:11:02.820 cpu : usr=0.00%, sys=0.14%, ctx=83, majf=0, minf=1 00:11:02.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.820 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.820 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.820 00:11:02.820 Run status group 0 (all jobs): 00:11:02.820 READ: bw=20.2MiB/s (21.2MB/s), 97.3KiB/s-10.7MiB/s (99.7kB/s-11.2MB/s), io=75.5MiB (79.2MB), run=2932-3737msec 00:11:02.820 00:11:02.820 Disk stats (read/write): 00:11:02.820 nvme0n1: ios=8741/0, merge=0/0, ticks=2994/0, in_queue=2994, util=95.14% 00:11:02.820 nvme0n2: ios=9859/0, merge=0/0, ticks=3145/0, in_queue=3145, util=94.94% 00:11:02.820 nvme0n3: ios=118/0, merge=0/0, ticks=3910/0, in_queue=3910, util=99.75% 00:11:02.820 nvme0n4: ios=125/0, merge=0/0, ticks=3839/0, in_queue=3839, util=99.66% 00:11:03.104 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.104 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:03.104 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.104 05:30:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:03.364 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.364 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:03.930 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.930 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:03.930 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:03.930 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1550576 00:11:03.930 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:03.930 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.210 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.210 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:04.210 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:04.210 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.210 
05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:04.210 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.210 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:04.210 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:04.210 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:04.210 nvmf hotplug test: fio failed as expected 00:11:04.210 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.468 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:04.468 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:04.468 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:04.468 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:04.468 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:04.468 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:04.468 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:04.468 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:04.469 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:04.469 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.469 05:30:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:04.469 rmmod nvme_tcp 00:11:04.469 rmmod nvme_fabrics 00:11:04.469 rmmod nvme_keyring 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1548644 ']' 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1548644 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1548644 ']' 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1548644 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1548644 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1548644' 00:11:04.469 killing process with pid 1548644 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1548644 00:11:04.469 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1548644 
00:11:04.726 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.726 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.726 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.726 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.726 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.726 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.726 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.726 05:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.256 00:11:07.256 real 0m23.344s 00:11:07.256 user 1m22.355s 00:11:07.256 sys 0m6.641s 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.256 ************************************ 00:11:07.256 END TEST nvmf_fio_target 00:11:07.256 ************************************ 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.256 ************************************ 00:11:07.256 START TEST nvmf_bdevio 00:11:07.256 ************************************ 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:07.256 * Looking for test storage... 00:11:07.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.256 05:31:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.256 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.156 05:31:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.156 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.157 05:31:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:09.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:09.157 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:09.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:09.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:09.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:11:09.157 00:11:09.157 --- 10.0.0.2 ping statistics --- 00:11:09.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.157 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:11:09.157 00:11:09.157 --- 10.0.0.1 ping statistics --- 00:11:09.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.157 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1553411 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1553411 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1553411 ']' 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.157 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.157 [2024-07-25 05:31:02.645473] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:11:09.158 [2024-07-25 05:31:02.645557] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.158 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.158 [2024-07-25 05:31:02.714508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.158 [2024-07-25 05:31:02.815790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.158 [2024-07-25 05:31:02.815858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.158 [2024-07-25 05:31:02.815875] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.158 [2024-07-25 05:31:02.815889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.158 [2024-07-25 05:31:02.815901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:09.158 [2024-07-25 05:31:02.815997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:09.158 [2024-07-25 05:31:02.816053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:09.158 [2024-07-25 05:31:02.816110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:09.158 [2024-07-25 05:31:02.816112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.416 [2024-07-25 05:31:02.969484] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.416 05:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.416 05:31:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.416 Malloc0 00:11:09.416 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.416 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.416 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.416 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.416 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.416 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.417 [2024-07-25 05:31:03.023124] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:09.417 { 00:11:09.417 "params": { 00:11:09.417 "name": "Nvme$subsystem", 00:11:09.417 "trtype": "$TEST_TRANSPORT", 00:11:09.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.417 "adrfam": "ipv4", 00:11:09.417 "trsvcid": "$NVMF_PORT", 00:11:09.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.417 "hdgst": ${hdgst:-false}, 00:11:09.417 "ddgst": ${ddgst:-false} 00:11:09.417 }, 00:11:09.417 "method": "bdev_nvme_attach_controller" 00:11:09.417 } 00:11:09.417 EOF 00:11:09.417 )") 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:09.417 05:31:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:09.417 "params": { 00:11:09.417 "name": "Nvme1", 00:11:09.417 "trtype": "tcp", 00:11:09.417 "traddr": "10.0.0.2", 00:11:09.417 "adrfam": "ipv4", 00:11:09.417 "trsvcid": "4420", 00:11:09.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.417 "hdgst": false, 00:11:09.417 "ddgst": false 00:11:09.417 }, 00:11:09.417 "method": "bdev_nvme_attach_controller" 00:11:09.417 }' 00:11:09.417 [2024-07-25 05:31:03.069626] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:11:09.417 [2024-07-25 05:31:03.069715] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553440 ] 00:11:09.417 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.675 [2024-07-25 05:31:03.130614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.675 [2024-07-25 05:31:03.222236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.675 [2024-07-25 05:31:03.222290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.675 [2024-07-25 05:31:03.222294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.932 I/O targets: 00:11:09.932 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:09.932 00:11:09.932 00:11:09.932 CUnit - A unit testing framework for C - Version 2.1-3 00:11:09.932 http://cunit.sourceforge.net/ 00:11:09.932 00:11:09.932 00:11:09.932 Suite: bdevio tests on: Nvme1n1 00:11:09.932 Test: blockdev write read block ...passed 00:11:09.932 Test: blockdev write zeroes read block ...passed 00:11:09.932 Test: blockdev write zeroes read no split 
...passed 00:11:10.189 Test: blockdev write zeroes read split ...passed 00:11:10.189 Test: blockdev write zeroes read split partial ...passed 00:11:10.189 Test: blockdev reset ...[2024-07-25 05:31:03.690074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:10.189 [2024-07-25 05:31:03.690180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee7a60 (9): Bad file descriptor 00:11:10.189 [2024-07-25 05:31:03.705196] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:10.189 passed 00:11:10.189 Test: blockdev write read 8 blocks ...passed 00:11:10.189 Test: blockdev write read size > 128k ...passed 00:11:10.189 Test: blockdev write read invalid size ...passed 00:11:10.189 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.189 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.189 Test: blockdev write read max offset ...passed 00:11:10.189 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.189 Test: blockdev writev readv 8 blocks ...passed 00:11:10.189 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.447 Test: blockdev writev readv block ...passed 00:11:10.447 Test: blockdev writev readv size > 128k ...passed 00:11:10.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.447 Test: blockdev comparev and writev ...[2024-07-25 05:31:03.917145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.447 [2024-07-25 05:31:03.917181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:10.447 [2024-07-25 05:31:03.917206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:10.447 [2024-07-25 05:31:03.917223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:10.447 [2024-07-25 05:31:03.917580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.447 [2024-07-25 05:31:03.917607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:10.447 [2024-07-25 05:31:03.917637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.447 [2024-07-25 05:31:03.917654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:10.447 [2024-07-25 05:31:03.918000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.447 [2024-07-25 05:31:03.918025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:10.447 [2024-07-25 05:31:03.918047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.447 [2024-07-25 05:31:03.918063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:10.447 [2024-07-25 05:31:03.918411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.447 [2024-07-25 05:31:03.918437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:10.447 [2024-07-25 05:31:03.918458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.447 [2024-07-25 05:31:03.918476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:10.447 passed 00:11:10.447 Test: blockdev nvme passthru rw ...passed 00:11:10.447 Test: blockdev nvme passthru vendor specific ...[2024-07-25 05:31:04.000551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.447 [2024-07-25 05:31:04.000579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:10.447 [2024-07-25 05:31:04.000755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.447 [2024-07-25 05:31:04.000779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:10.447 [2024-07-25 05:31:04.000958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.447 [2024-07-25 05:31:04.000983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:10.447 [2024-07-25 05:31:04.001163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.447 [2024-07-25 05:31:04.001187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:10.447 passed 00:11:10.447 Test: blockdev nvme admin passthru ...passed 00:11:10.447 Test: blockdev copy ...passed 00:11:10.447 00:11:10.447 Run Summary: Type Total Ran Passed Failed Inactive 00:11:10.447 suites 1 1 n/a 0 0 00:11:10.447 tests 23 23 23 0 0 00:11:10.447 asserts 152 152 152 0 n/a 00:11:10.447 00:11:10.447 Elapsed time = 
1.154 seconds 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.705 rmmod nvme_tcp 00:11:10.705 rmmod nvme_fabrics 00:11:10.705 rmmod nvme_keyring 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1553411 ']' 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1553411 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 1553411 ']' 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1553411 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1553411 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1553411' 00:11:10.705 killing process with pid 1553411 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1553411 00:11:10.705 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1553411 00:11:10.963 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.963 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:10.963 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.963 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.963 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.963 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.963 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.963 
05:31:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:13.491 00:11:13.491 real 0m6.238s 00:11:13.491 user 0m10.108s 00:11:13.491 sys 0m2.015s 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.491 ************************************ 00:11:13.491 END TEST nvmf_bdevio 00:11:13.491 ************************************ 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:13.491 00:11:13.491 real 3m49.600s 00:11:13.491 user 9m55.918s 00:11:13.491 sys 1m6.760s 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:13.491 ************************************ 00:11:13.491 END TEST nvmf_target_core 00:11:13.491 ************************************ 00:11:13.491 05:31:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:13.491 05:31:06 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:13.491 05:31:06 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.491 05:31:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:13.491 ************************************ 00:11:13.491 START TEST nvmf_target_extra 00:11:13.491 ************************************ 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:13.491 * Looking for test storage... 
00:11:13.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.491 05:31:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:13.492 05:31:06 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:13.492 ************************************ 00:11:13.492 START TEST nvmf_example 00:11:13.492 ************************************ 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:13.492 * Looking for test storage... 
00:11:13.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:13.492 05:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:13.492 05:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:13.492 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:15.393 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:15.393 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:15.393 05:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:15.393 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:15.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:15.393 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
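The `nvmf_tcp_init` steps that follow move one port of the dual-port NIC into a network namespace so the target (10.0.0.2 on cvl_0_0, inside the namespace) and the initiator (10.0.0.1 on cvl_0_1, in the root namespace) talk over real wire. A dry-run sketch of that command sequence, echoing rather than executing (the real run needs root and the physical interfaces; the `run` helper and variable names are illustrative, not part of the harness):

```shell
# Dry-run: print each command instead of executing it.
run() { printf '+ %s\n' "$*"; }

TGT_IF=cvl_0_0       # target-side port, moved into the namespace
INI_IF=cvl_0_1       # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow inbound NVMe/TCP (port 4420) from the initiator interface.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The bidirectional pings in the log (`ping -c 1 10.0.0.2` from the root namespace, `ip netns exec … ping -c 1 10.0.0.1` from inside it) then confirm the two sides can reach each other before the target app starts.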
00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:15.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:11:15.394 00:11:15.394 --- 10.0.0.2 ping statistics --- 00:11:15.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.394 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:11:15.394 00:11:15.394 --- 10.0.0.1 ping statistics --- 00:11:15.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.394 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1555561 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1555561 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1555561 ']' 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
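Once the `nvmf` example app is up inside the namespace, the harness provisions it over JSON-RPC (`rpc_cmd` in the log wraps SPDK's `scripts/rpc.py`): create the TCP transport, back a subsystem with a malloc bdev, and expose a listener at 10.0.0.2:4420. A dry-run sketch of that sequence, echoed rather than executed since it needs a running target (the `rpc` helper here is illustrative):

```shell
# Dry-run: print the RPC invocations the log performs, in order.
rpc() { printf 'rpc.py %s\n' "$*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512                      # returns bdev name "Malloc0"
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After this, `spdk_nvme_perf` connects from the root namespace with `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'` and drives the 10-second randrw workload whose latency table appears below in the log.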
00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.394 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.394 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.652 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.652 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:15.652 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:15.652 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.652 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.653 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.653 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.653 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.653 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.653 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:15.653 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.653 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:15.911 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:15.911 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.905 Initializing NVMe Controllers 00:11:25.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:25.905 Initialization complete. Launching workers. 00:11:25.905 ======================================================== 00:11:25.905 Latency(us) 00:11:25.905 Device Information : IOPS MiB/s Average min max 00:11:25.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13847.09 54.09 4623.43 693.50 17092.97 00:11:25.905 ======================================================== 00:11:25.905 Total : 13847.09 54.09 4623.43 693.50 17092.97 00:11:25.905 00:11:25.905 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:25.905 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:25.905 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.905 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:25.905 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.906 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:25.906 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.906 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.906 rmmod nvme_tcp 00:11:25.906 rmmod nvme_fabrics 00:11:25.906 rmmod nvme_keyring 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1555561 ']' 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1555561 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1555561 ']' 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1555561 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1555561 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1555561' 00:11:26.164 killing process with pid 1555561 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1555561 00:11:26.164 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1555561 00:11:26.423 nvmf threads initialize successfully 00:11:26.423 bdev subsystem init successfully 00:11:26.423 created a nvmf target service 00:11:26.423 create targets's poll groups done 00:11:26.423 all subsystems of target started 00:11:26.423 nvmf target is running 00:11:26.423 all subsystems of target stopped 00:11:26.423 destroy targets's poll groups done 00:11:26.423 destroyed the nvmf target 
service 00:11:26.423 bdev subsystem finish successfully 00:11:26.423 nvmf threads destroy successfully 00:11:26.423 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.423 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.423 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:26.423 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.423 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.423 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.423 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.423 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.324 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:28.324 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:28.324 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.324 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.324 00:11:28.324 real 0m15.125s 00:11:28.324 user 0m41.277s 00:11:28.324 sys 0m3.624s 00:11:28.324 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.324 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.324 ************************************ 00:11:28.324 END TEST nvmf_example 00:11:28.324 ************************************ 00:11:28.324 05:31:21 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:28.324 05:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.324 05:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.324 05:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.324 ************************************ 00:11:28.324 START TEST nvmf_filesystem 00:11:28.324 ************************************ 00:11:28.324 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:28.585 * Looking for test storage... 00:11:28.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:28.585 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 
00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # 
ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:28.586 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:28.586 #define SPDK_CONFIG_H 00:11:28.586 #define SPDK_CONFIG_APPS 1 00:11:28.586 #define SPDK_CONFIG_ARCH native 00:11:28.586 #undef SPDK_CONFIG_ASAN 00:11:28.586 #undef SPDK_CONFIG_AVAHI 00:11:28.586 #undef SPDK_CONFIG_CET 00:11:28.586 #define SPDK_CONFIG_COVERAGE 1 00:11:28.586 #define SPDK_CONFIG_CROSS_PREFIX 00:11:28.586 #undef SPDK_CONFIG_CRYPTO 00:11:28.586 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:28.586 #undef SPDK_CONFIG_CUSTOMOCF 00:11:28.586 #undef SPDK_CONFIG_DAOS 00:11:28.586 #define SPDK_CONFIG_DAOS_DIR 00:11:28.586 #define SPDK_CONFIG_DEBUG 1 00:11:28.586 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:28.586 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:28.586 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:28.586 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:28.586 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:28.586 #undef SPDK_CONFIG_DPDK_UADK 00:11:28.586 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:28.587 #define 
SPDK_CONFIG_EXAMPLES 1 00:11:28.587 #undef SPDK_CONFIG_FC 00:11:28.587 #define SPDK_CONFIG_FC_PATH 00:11:28.587 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:28.587 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:28.587 #undef SPDK_CONFIG_FUSE 00:11:28.587 #undef SPDK_CONFIG_FUZZER 00:11:28.587 #define SPDK_CONFIG_FUZZER_LIB 00:11:28.587 #undef SPDK_CONFIG_GOLANG 00:11:28.587 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:28.587 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:28.587 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:28.587 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:28.587 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:28.587 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:28.587 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:28.587 #define SPDK_CONFIG_IDXD 1 00:11:28.587 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:28.587 #undef SPDK_CONFIG_IPSEC_MB 00:11:28.587 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:28.587 #define SPDK_CONFIG_ISAL 1 00:11:28.587 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:28.587 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:28.587 #define SPDK_CONFIG_LIBDIR 00:11:28.587 #undef SPDK_CONFIG_LTO 00:11:28.587 #define SPDK_CONFIG_MAX_LCORES 128 00:11:28.587 #define SPDK_CONFIG_NVME_CUSE 1 00:11:28.587 #undef SPDK_CONFIG_OCF 00:11:28.587 #define SPDK_CONFIG_OCF_PATH 00:11:28.587 #define SPDK_CONFIG_OPENSSL_PATH 00:11:28.587 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:28.587 #define SPDK_CONFIG_PGO_DIR 00:11:28.587 #undef SPDK_CONFIG_PGO_USE 00:11:28.587 #define SPDK_CONFIG_PREFIX /usr/local 00:11:28.587 #undef SPDK_CONFIG_RAID5F 00:11:28.587 #undef SPDK_CONFIG_RBD 00:11:28.587 #define SPDK_CONFIG_RDMA 1 00:11:28.587 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:28.587 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:28.587 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:28.587 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:28.587 #define SPDK_CONFIG_SHARED 1 00:11:28.587 #undef SPDK_CONFIG_SMA 00:11:28.587 #define SPDK_CONFIG_TESTS 1 00:11:28.587 #undef SPDK_CONFIG_TSAN 00:11:28.587 #define 
SPDK_CONFIG_UBLK 1 00:11:28.587 #define SPDK_CONFIG_UBSAN 1 00:11:28.587 #undef SPDK_CONFIG_UNIT_TESTS 00:11:28.587 #undef SPDK_CONFIG_URING 00:11:28.587 #define SPDK_CONFIG_URING_PATH 00:11:28.587 #undef SPDK_CONFIG_URING_ZNS 00:11:28.587 #undef SPDK_CONFIG_USDT 00:11:28.587 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:28.587 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:28.587 #define SPDK_CONFIG_VFIO_USER 1 00:11:28.587 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:28.587 #define SPDK_CONFIG_VHOST 1 00:11:28.587 #define SPDK_CONFIG_VIRTIO 1 00:11:28.587 #undef SPDK_CONFIG_VTUNE 00:11:28.587 #define SPDK_CONFIG_VTUNE_DIR 00:11:28.587 #define SPDK_CONFIG_WERROR 1 00:11:28.587 #define SPDK_CONFIG_WPDK_DIR 00:11:28.587 #undef SPDK_CONFIG_XNVME 00:11:28.587 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.587 05:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:28.587 05:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:28.587 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:28.588 
05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:28.588 05:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:28.588 
05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : true 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:28.588 05:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:28.588 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1557249 ]] 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1557249 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.AeQc6E 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:28.589 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.AeQc6E/tests/target /tmp/spdk.AeQc6E 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:28.590 05:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=53339115520 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994729472 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=8655613952 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30935183360 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997364736 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:28.590 05:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376539136 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398948352 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996398080 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997364736 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=966656 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199468032 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199472128 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:28.590 * Looking for test storage... 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=53339115520 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=10870206464 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:28.590 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.591 05:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.591 05:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.591 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.119 05:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:31.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.119 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:31.120 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:31.120 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:31.120 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.120 05:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:31.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:11:31.120 00:11:31.120 --- 10.0.0.2 ping statistics --- 00:11:31.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.120 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:31.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:11:31.120 00:11:31.120 --- 10.0.0.1 ping statistics --- 00:11:31.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.120 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:31.120 05:31:24 
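The nvmf_tcp_init sequence traced above (flush both ports, move one into a namespace, assign the 10.0.0.x addresses, open the firewall, ping both ways) can be sketched as a standalone function. Interface names, the namespace name, and the addresses are taken from the log; the commands require root and the two e810 netdevs, so the sketch only defines the function rather than running it:

```shell
#!/usr/bin/env bash
# Sketch of the dual-interface NVMe/TCP topology that nvmf/common.sh's
# nvmf_tcp_init builds in the trace above. Names and addresses come from
# the log; invoking the function requires root.
setup_nvmf_tcp_ns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    # start from a clean slate on both ports
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # move the target-side port into its own network namespace
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    # initiator stays in the root namespace
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # let NVMe/TCP traffic (port 4420) through to the target
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    # verify reachability in both directions, as the log does
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

With this topology in place, the target application is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why NVMF_TARGET_NS_CMD is prepended to NVMF_APP later in the trace.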
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.120 ************************************ 00:11:31.120 START TEST nvmf_filesystem_no_in_capsule 00:11:31.120 ************************************ 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1558869 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1558869 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 1558869 ']' 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.120 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.121 [2024-07-25 05:31:24.440992] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:11:31.121 [2024-07-25 05:31:24.441080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.121 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.121 [2024-07-25 05:31:24.505997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.121 [2024-07-25 05:31:24.598992] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.121 [2024-07-25 05:31:24.599061] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:31.121 [2024-07-25 05:31:24.599075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.121 [2024-07-25 05:31:24.599087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.121 [2024-07-25 05:31:24.599097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.121 [2024-07-25 05:31:24.599389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.121 [2024-07-25 05:31:24.602264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.121 [2024-07-25 05:31:24.602306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.121 [2024-07-25 05:31:24.602310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.121 [2024-07-25 05:31:24.755846] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.121 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.379 Malloc1 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.379 [2024-07-25 05:31:24.937410] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:31.379 05:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:31.379 { 00:11:31.379 "name": "Malloc1", 00:11:31.379 "aliases": [ 00:11:31.379 "848245ad-53c0-424a-ac80-29b3d5fe8671" 00:11:31.379 ], 00:11:31.379 "product_name": "Malloc disk", 00:11:31.379 "block_size": 512, 00:11:31.379 "num_blocks": 1048576, 00:11:31.379 "uuid": "848245ad-53c0-424a-ac80-29b3d5fe8671", 00:11:31.379 "assigned_rate_limits": { 00:11:31.379 "rw_ios_per_sec": 0, 00:11:31.379 "rw_mbytes_per_sec": 0, 00:11:31.379 "r_mbytes_per_sec": 0, 00:11:31.379 "w_mbytes_per_sec": 0 00:11:31.379 }, 00:11:31.379 "claimed": true, 00:11:31.379 "claim_type": "exclusive_write", 00:11:31.379 "zoned": false, 00:11:31.379 "supported_io_types": { 00:11:31.379 "read": true, 00:11:31.379 "write": true, 00:11:31.379 "unmap": true, 00:11:31.379 "flush": true, 00:11:31.379 "reset": true, 00:11:31.379 "nvme_admin": false, 00:11:31.379 "nvme_io": false, 00:11:31.379 "nvme_io_md": false, 00:11:31.379 "write_zeroes": true, 00:11:31.379 "zcopy": true, 00:11:31.379 "get_zone_info": false, 00:11:31.379 "zone_management": false, 00:11:31.379 "zone_append": false, 00:11:31.379 "compare": false, 00:11:31.379 "compare_and_write": 
false, 00:11:31.379 "abort": true, 00:11:31.379 "seek_hole": false, 00:11:31.379 "seek_data": false, 00:11:31.379 "copy": true, 00:11:31.379 "nvme_iov_md": false 00:11:31.379 }, 00:11:31.379 "memory_domains": [ 00:11:31.379 { 00:11:31.379 "dma_device_id": "system", 00:11:31.379 "dma_device_type": 1 00:11:31.379 }, 00:11:31.379 { 00:11:31.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.379 "dma_device_type": 2 00:11:31.379 } 00:11:31.379 ], 00:11:31.379 "driver_specific": {} 00:11:31.379 } 00:11:31.379 ]' 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:31.379 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:31.380 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:31.380 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:31.380 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:31.380 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:31.380 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:31.380 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.945 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:31.945 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:31.945 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.945 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:31.945 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:34.467 05:31:27 
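The waitforserial loop traced above (sleep 2, `lsblk -l -o NAME,SERIAL`, `grep -c` on the subsystem serial, up to 16 attempts) can be sketched roughly as follows. The third parameter is our addition so the device lister can be swapped out for testing; it is not part of the SPDK helper:

```shell
# Rough sketch of the waitforserial helper from common/autotest_common.sh
# as exercised above: after `nvme connect`, poll lsblk until a block device
# carrying the subsystem serial shows up, giving up after 16 attempts.
waitforserial_sketch() {
    local serial=$1 expected=${2:-1}
    local lister=${3:-"lsblk -l -o NAME,SERIAL"}   # override point (ours)
    local i=0 found
    while (( i++ <= 15 )); do
        sleep 2
        # grep -c prints 0 and exits nonzero on no match; keep the count
        found=$($lister | grep -c "$serial" || true)
        if (( found == expected )); then
            return 0
        fi
    done
    echo "no device with serial $serial appeared" >&2
    return 1
}
```

In the log the serial is SPDKISFASTANDAWESOME, assigned when the subsystem was created with `nvmf_create_subsystem ... -s SPDKISFASTANDAWESOME`, so the subsequent `grep -oP` can map that serial back to the kernel device name (nvme0n1).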
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:34.467 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:34.468 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:34.468 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:35.399 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:36.331 05:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.331 ************************************ 00:11:36.331 START TEST filesystem_ext4 00:11:36.331 ************************************ 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:36.331 05:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:36.331 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:36.331 mke2fs 1.46.5 (30-Dec-2021) 00:11:36.589 Discarding device blocks: 0/522240 done 00:11:36.589 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:36.589 Filesystem UUID: ab3f1849-24f1-49ee-a222-024014758a82 00:11:36.589 Superblock backups stored on blocks: 00:11:36.589 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:36.589 00:11:36.589 Allocating group tables: 0/64 done 00:11:36.589 Writing inode tables: 0/64 done 00:11:36.589 Creating journal (8192 blocks): done 00:11:36.589 Writing superblocks and filesystem accounting information: 0/64 done 00:11:36.589 00:11:36.589 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:36.589 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.154 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.412 05:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1558869 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.412 00:11:37.412 real 0m0.959s 00:11:37.412 user 0m0.012s 00:11:37.412 sys 0m0.059s 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:37.412 ************************************ 00:11:37.412 END TEST filesystem_ext4 00:11:37.412 ************************************ 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:37.412 
05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.412 ************************************ 00:11:37.412 START TEST filesystem_btrfs 00:11:37.412 ************************************ 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:37.412 05:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:37.412 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:37.670 btrfs-progs v6.6.2 00:11:37.670 See https://btrfs.readthedocs.io for more information. 00:11:37.670 00:11:37.670 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:37.670 NOTE: several default settings have changed in version 5.15, please make sure 00:11:37.670 this does not affect your deployments: 00:11:37.670 - DUP for metadata (-m dup) 00:11:37.670 - enabled no-holes (-O no-holes) 00:11:37.670 - enabled free-space-tree (-R free-space-tree) 00:11:37.670 00:11:37.670 Label: (null) 00:11:37.670 UUID: 5f861a0e-6047-42d5-a9f8-a252a09a5d17 00:11:37.670 Node size: 16384 00:11:37.670 Sector size: 4096 00:11:37.670 Filesystem size: 510.00MiB 00:11:37.670 Block group profiles: 00:11:37.670 Data: single 8.00MiB 00:11:37.670 Metadata: DUP 32.00MiB 00:11:37.670 System: DUP 8.00MiB 00:11:37.670 SSD detected: yes 00:11:37.670 Zoned device: no 00:11:37.670 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:37.670 Runtime features: free-space-tree 00:11:37.670 Checksum: crc32c 00:11:37.670 Number of devices: 1 00:11:37.670 Devices: 00:11:37.670 ID SIZE PATH 00:11:37.670 1 510.00MiB /dev/nvme0n1p1 00:11:37.670 00:11:37.670 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:37.670 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:11:37.928 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.928 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:37.928 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.928 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:37.928 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:37.928 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.185 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1558869 00:11:38.185 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.185 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.185 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.185 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.185 00:11:38.185 real 0m0.673s 00:11:38.185 user 0m0.013s 00:11:38.185 sys 0m0.123s 00:11:38.185 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:38.185 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:38.185 ************************************ 00:11:38.185 END TEST filesystem_btrfs 00:11:38.185 ************************************ 00:11:38.185 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.186 ************************************ 00:11:38.186 START TEST filesystem_xfs 00:11:38.186 ************************************ 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:38.186 05:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:38.186 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:38.186 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:38.186 = sectsz=512 attr=2, projid32bit=1 00:11:38.186 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:38.186 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:38.186 data = bsize=4096 blocks=130560, imaxpct=25 00:11:38.186 = sunit=0 swidth=0 blks 00:11:38.186 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:38.186 log =internal log bsize=4096 blocks=16384, version=2 00:11:38.186 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:38.186 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:39.153 Discarding blocks...Done. 
00:11:39.153 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:39.153 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1558869 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:41.678 05:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:41.678 00:11:41.678 real 0m3.496s 00:11:41.678 user 0m0.017s 00:11:41.678 sys 0m0.059s 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:41.678 ************************************ 00:11:41.678 END TEST filesystem_xfs 00:11:41.678 ************************************ 00:11:41.678 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1558869 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1558869 ']' 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1558869 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1558869 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1558869' 00:11:41.937 killing process with pid 1558869 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1558869 00:11:41.937 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1558869 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:42.504 00:11:42.504 real 0m11.688s 00:11:42.504 user 0m44.786s 00:11:42.504 sys 0m1.797s 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 ************************************ 00:11:42.504 END TEST nvmf_filesystem_no_in_capsule 00:11:42.504 ************************************ 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.504 05:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 ************************************ 00:11:42.504 START TEST nvmf_filesystem_in_capsule 00:11:42.504 ************************************ 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1560427 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1560427 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1560427 ']' 00:11:42.504 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.505 05:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.505 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.505 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.505 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.505 [2024-07-25 05:31:36.186714] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:11:42.505 [2024-07-25 05:31:36.186812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.763 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.763 [2024-07-25 05:31:36.253130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.763 [2024-07-25 05:31:36.343656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.763 [2024-07-25 05:31:36.343720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.763 [2024-07-25 05:31:36.343746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.763 [2024-07-25 05:31:36.343760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.763 [2024-07-25 05:31:36.343772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:42.763 [2024-07-25 05:31:36.343853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.763 [2024-07-25 05:31:36.343910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.763 [2024-07-25 05:31:36.344023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.763 [2024-07-25 05:31:36.344025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.021 [2024-07-25 05:31:36.491573] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.021 Malloc1 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.021 05:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.021 [2024-07-25 05:31:36.662034] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.021 05:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.021 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:43.021 { 00:11:43.021 "name": "Malloc1", 00:11:43.021 "aliases": [ 00:11:43.021 "c8ed0708-7ca5-4ada-a53d-57ea83ed0d8f" 00:11:43.021 ], 00:11:43.021 "product_name": "Malloc disk", 00:11:43.021 "block_size": 512, 00:11:43.021 "num_blocks": 1048576, 00:11:43.021 "uuid": "c8ed0708-7ca5-4ada-a53d-57ea83ed0d8f", 00:11:43.021 "assigned_rate_limits": { 00:11:43.021 "rw_ios_per_sec": 0, 00:11:43.021 "rw_mbytes_per_sec": 0, 00:11:43.021 "r_mbytes_per_sec": 0, 00:11:43.021 "w_mbytes_per_sec": 0 00:11:43.021 }, 00:11:43.021 "claimed": true, 00:11:43.021 "claim_type": "exclusive_write", 00:11:43.021 "zoned": false, 00:11:43.021 "supported_io_types": { 00:11:43.021 "read": true, 00:11:43.021 "write": true, 00:11:43.021 "unmap": true, 00:11:43.021 "flush": true, 00:11:43.021 "reset": true, 00:11:43.021 "nvme_admin": false, 00:11:43.021 "nvme_io": false, 00:11:43.021 "nvme_io_md": false, 00:11:43.021 "write_zeroes": true, 00:11:43.021 "zcopy": true, 00:11:43.021 "get_zone_info": false, 00:11:43.021 "zone_management": false, 00:11:43.021 "zone_append": false, 00:11:43.022 "compare": false, 00:11:43.022 "compare_and_write": false, 00:11:43.022 "abort": true, 00:11:43.022 "seek_hole": false, 00:11:43.022 "seek_data": false, 00:11:43.022 "copy": true, 00:11:43.022 "nvme_iov_md": false 00:11:43.022 }, 00:11:43.022 "memory_domains": [ 00:11:43.022 { 00:11:43.022 "dma_device_id": "system", 00:11:43.022 "dma_device_type": 1 00:11:43.022 }, 00:11:43.022 { 00:11:43.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.022 "dma_device_type": 2 00:11:43.022 } 00:11:43.022 ], 00:11:43.022 
"driver_specific": {} 00:11:43.022 } 00:11:43.022 ]' 00:11:43.022 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:43.022 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:43.022 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:43.279 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:43.279 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:43.279 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:43.279 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:43.279 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.845 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.845 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:43.845 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.845 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:11:43.845 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:45.742 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:45.742 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:45.742 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:45.999 05:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:45.999 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:46.256 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:47.189 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.122 ************************************ 00:11:48.122 START TEST filesystem_in_capsule_ext4 00:11:48.122 ************************************ 00:11:48.122 05:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:48.122 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:48.122 mke2fs 1.46.5 (30-Dec-2021) 00:11:48.379 Discarding device blocks: 
0/522240 done 00:11:48.379 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:48.379 Filesystem UUID: 25335dfc-ebac-450f-8b66-b8524c78f82b 00:11:48.379 Superblock backups stored on blocks: 00:11:48.379 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:48.379 00:11:48.379 Allocating group tables: 0/64 done 00:11:48.379 Writing inode tables: 0/64 done 00:11:48.636 Creating journal (8192 blocks): done 00:11:48.636 Writing superblocks and filesystem accounting information: 0/64 done 00:11:48.636 00:11:48.636 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:48.636 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1560427 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.569 00:11:49.569 real 0m1.384s 00:11:49.569 user 0m0.015s 00:11:49.569 sys 0m0.053s 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:49.569 ************************************ 00:11:49.569 END TEST filesystem_in_capsule_ext4 00:11:49.569 ************************************ 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.569 ************************************ 00:11:49.569 START 
TEST filesystem_in_capsule_btrfs 00:11:49.569 ************************************ 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:49.569 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:49.832 btrfs-progs v6.6.2 00:11:49.832 See https://btrfs.readthedocs.io for more information. 00:11:49.832 00:11:49.832 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:49.832 NOTE: several default settings have changed in version 5.15, please make sure 00:11:49.832 this does not affect your deployments: 00:11:49.832 - DUP for metadata (-m dup) 00:11:49.832 - enabled no-holes (-O no-holes) 00:11:49.832 - enabled free-space-tree (-R free-space-tree) 00:11:49.832 00:11:49.832 Label: (null) 00:11:49.832 UUID: c182550b-5eca-41c1-9a2d-6eb57edd46dc 00:11:49.832 Node size: 16384 00:11:49.832 Sector size: 4096 00:11:49.832 Filesystem size: 510.00MiB 00:11:49.832 Block group profiles: 00:11:49.832 Data: single 8.00MiB 00:11:49.832 Metadata: DUP 32.00MiB 00:11:49.832 System: DUP 8.00MiB 00:11:49.832 SSD detected: yes 00:11:49.832 Zoned device: no 00:11:49.832 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:49.832 Runtime features: free-space-tree 00:11:49.832 Checksum: crc32c 00:11:49.832 Number of devices: 1 00:11:49.832 Devices: 00:11:49.832 ID SIZE PATH 00:11:49.832 1 510.00MiB /dev/nvme0n1p1 00:11:49.832 00:11:49.832 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:49.832 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:50.396 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:50.396 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:50.396 05:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:50.396 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:50.396 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:50.396 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:50.396 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1560427 00:11:50.396 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:50.396 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:50.397 00:11:50.397 real 0m0.877s 00:11:50.397 user 0m0.025s 00:11:50.397 sys 0m0.122s 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:50.397 ************************************ 00:11:50.397 END TEST 
filesystem_in_capsule_btrfs 00:11:50.397 ************************************ 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.397 ************************************ 00:11:50.397 START TEST filesystem_in_capsule_xfs 00:11:50.397 ************************************ 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:50.397 05:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:50.397 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:50.654 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:50.654 = sectsz=512 attr=2, projid32bit=1 00:11:50.654 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:50.654 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:50.654 data = bsize=4096 blocks=130560, imaxpct=25 00:11:50.654 = sunit=0 swidth=0 blks 00:11:50.654 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:50.654 log =internal log bsize=4096 blocks=16384, version=2 00:11:50.654 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:50.654 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:51.584 Discarding blocks...Done. 
00:11:51.584 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:51.584 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1560427 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.108 00:11:54.108 real 0m3.252s 00:11:54.108 user 0m0.016s 00:11:54.108 sys 0m0.055s 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:54.108 ************************************ 00:11:54.108 END TEST filesystem_in_capsule_xfs 00:11:54.108 ************************************ 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.108 05:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1560427 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1560427 ']' 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1560427 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:54.108 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:54.108 05:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1560427 00:11:54.365 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:54.365 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:54.365 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1560427' 00:11:54.365 killing process with pid 1560427 00:11:54.365 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1560427 00:11:54.365 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1560427 00:11:54.622 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:54.622 00:11:54.622 real 0m12.123s 00:11:54.622 user 0m46.513s 00:11:54.622 sys 0m1.783s 00:11:54.622 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.622 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.622 ************************************ 00:11:54.622 END TEST nvmf_filesystem_in_capsule 00:11:54.622 ************************************ 00:11:54.622 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:54.622 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.622 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:54.622 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.622 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:54.622 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.622 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.622 rmmod nvme_tcp 00:11:54.622 rmmod nvme_fabrics 00:11:54.622 rmmod nvme_keyring 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.879 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.779 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:56.779 00:11:56.779 real 
0m28.396s 00:11:56.779 user 1m32.227s 00:11:56.779 sys 0m5.241s 00:11:56.779 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.779 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:56.779 ************************************ 00:11:56.779 END TEST nvmf_filesystem 00:11:56.779 ************************************ 00:11:56.779 05:31:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:56.779 05:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:56.779 05:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.779 05:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.779 ************************************ 00:11:56.779 START TEST nvmf_target_discovery 00:11:56.779 ************************************ 00:11:56.779 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:57.037 * Looking for test storage... 
00:11:57.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.037 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:58.938 
05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:11:58.938 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:58.938 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:58.938 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:58.939 05:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:58.939 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.939 05:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:58.939 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:58.939 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:59.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:11:59.198 00:11:59.198 --- 10.0.0.2 ping statistics --- 00:11:59.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.198 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:11:59.198 00:11:59.198 --- 10.0.0.1 ping statistics --- 00:11:59.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.198 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:59.198 05:31:52 
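The trace above shows `nvmf_tcp_init` moving the target NIC (`cvl_0_0`) into a private network namespace, assigning 10.0.0.2/24 to it and 10.0.0.1/24 to the initiator side, opening TCP port 4420, and ping-verifying both directions before the target starts. A sketch of that command sequence, with the interface, namespace, and address values taken from this log (the helper name `netns_setup_cmds` is illustrative, not part of `nvmf/common.sh`):

```python
def netns_setup_cmds(target_if, initiator_if, ns, target_ip, initiator_ip, port):
    """Sketch of the namespace setup sequence traced above (nvmf/common.sh@244-264)."""
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {initiator_if}",
        f"ip netns add {ns}",
        # Move the target interface into the namespace so target and
        # initiator traffic cross a real network boundary on one host.
        f"ip link set {target_if} netns {ns}",
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
        # Accept NVMe/TCP traffic on the initiator side before the ping check.
        f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport {port} -j ACCEPT",
    ]

cmds = netns_setup_cmds("cvl_0_0", "cvl_0_1", "cvl_0_0_ns_spdk",
                        "10.0.0.2", "10.0.0.1", 4420)
```

Running these commands requires root; the sketch only reconstructs the order the log records, ending with the `iptables` rule that precedes the two `ping -c 1` checks.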
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1564020 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1564020 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1564020 ']' 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:59.198 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.198 [2024-07-25 05:31:52.730833] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:11:59.198 [2024-07-25 05:31:52.730911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.198 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.198 [2024-07-25 05:31:52.796052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.198 [2024-07-25 05:31:52.885079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.198 [2024-07-25 05:31:52.885139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.198 [2024-07-25 05:31:52.885152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.198 [2024-07-25 05:31:52.885162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.198 [2024-07-25 05:31:52.885171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:59.198 [2024-07-25 05:31:52.885256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.198 [2024-07-25 05:31:52.885317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.198 [2024-07-25 05:31:52.885382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.198 [2024-07-25 05:31:52.885385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.456 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.456 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 [2024-07-25 05:31:53.038698] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:59.457 05:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 Null1 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 [2024-07-25 05:31:53.083027] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 Null2 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 
05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 Null3 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.457 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.716 Null4 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:59.716 05:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:59.716 00:11:59.716 Discovery Log Number of Records 6, Generation counter 6 00:11:59.716 =====Discovery Log Entry 0====== 00:11:59.716 trtype: tcp 00:11:59.716 adrfam: ipv4 00:11:59.716 subtype: current discovery subsystem 00:11:59.716 treq: not required 00:11:59.716 portid: 0 00:11:59.716 trsvcid: 4420 00:11:59.716 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:59.716 traddr: 10.0.0.2 00:11:59.716 eflags: explicit discovery connections, duplicate discovery information 00:11:59.716 sectype: none 00:11:59.716 =====Discovery Log Entry 1====== 00:11:59.716 trtype: tcp 00:11:59.716 adrfam: ipv4 00:11:59.716 subtype: nvme subsystem 00:11:59.716 treq: not required 00:11:59.716 portid: 0 00:11:59.716 trsvcid: 4420 00:11:59.716 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:59.716 traddr: 10.0.0.2 00:11:59.716 eflags: none 00:11:59.716 sectype: none 00:11:59.716 =====Discovery Log Entry 2====== 00:11:59.716 trtype: tcp 00:11:59.716 adrfam: ipv4 00:11:59.716 subtype: nvme subsystem 00:11:59.716 treq: not required 00:11:59.716 portid: 0 00:11:59.716 trsvcid: 4420 00:11:59.716 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:59.716 traddr: 10.0.0.2 00:11:59.716 eflags: none 00:11:59.716 sectype: none 00:11:59.716 =====Discovery Log Entry 3====== 00:11:59.716 trtype: tcp 00:11:59.716 adrfam: ipv4 00:11:59.716 subtype: nvme subsystem 00:11:59.716 treq: not required 00:11:59.716 portid: 
0 00:11:59.716 trsvcid: 4420 00:11:59.716 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:59.716 traddr: 10.0.0.2 00:11:59.716 eflags: none 00:11:59.716 sectype: none 00:11:59.716 =====Discovery Log Entry 4====== 00:11:59.716 trtype: tcp 00:11:59.716 adrfam: ipv4 00:11:59.716 subtype: nvme subsystem 00:11:59.716 treq: not required 00:11:59.716 portid: 0 00:11:59.716 trsvcid: 4420 00:11:59.716 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:59.716 traddr: 10.0.0.2 00:11:59.716 eflags: none 00:11:59.716 sectype: none 00:11:59.716 =====Discovery Log Entry 5====== 00:11:59.716 trtype: tcp 00:11:59.716 adrfam: ipv4 00:11:59.716 subtype: discovery subsystem referral 00:11:59.716 treq: not required 00:11:59.716 portid: 0 00:11:59.716 trsvcid: 4430 00:11:59.716 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:59.716 traddr: 10.0.0.2 00:11:59.716 eflags: none 00:11:59.716 sectype: none 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:59.716 Perform nvmf subsystem discovery via RPC 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.716 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.716 [ 00:11:59.716 { 00:11:59.716 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:59.716 "subtype": "Discovery", 00:11:59.716 "listen_addresses": [ 00:11:59.716 { 00:11:59.716 "trtype": "TCP", 00:11:59.716 "adrfam": "IPv4", 00:11:59.716 "traddr": "10.0.0.2", 00:11:59.716 "trsvcid": "4420" 00:11:59.716 } 00:11:59.716 ], 00:11:59.716 "allow_any_host": true, 00:11:59.716 "hosts": [] 00:11:59.716 }, 00:11:59.716 { 00:11:59.716 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:59.716 "subtype": "NVMe", 00:11:59.716 "listen_addresses": [ 
00:11:59.716 { 00:11:59.716 "trtype": "TCP", 00:11:59.716 "adrfam": "IPv4", 00:11:59.716 "traddr": "10.0.0.2", 00:11:59.716 "trsvcid": "4420" 00:11:59.716 } 00:11:59.716 ], 00:11:59.716 "allow_any_host": true, 00:11:59.716 "hosts": [], 00:11:59.716 "serial_number": "SPDK00000000000001", 00:11:59.716 "model_number": "SPDK bdev Controller", 00:11:59.716 "max_namespaces": 32, 00:11:59.716 "min_cntlid": 1, 00:11:59.716 "max_cntlid": 65519, 00:11:59.717 "namespaces": [ 00:11:59.717 { 00:11:59.717 "nsid": 1, 00:11:59.717 "bdev_name": "Null1", 00:11:59.717 "name": "Null1", 00:11:59.717 "nguid": "7766DB83795F4429ABC2B90EFA8A100D", 00:11:59.717 "uuid": "7766db83-795f-4429-abc2-b90efa8a100d" 00:11:59.717 } 00:11:59.717 ] 00:11:59.717 }, 00:11:59.717 { 00:11:59.717 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:59.717 "subtype": "NVMe", 00:11:59.717 "listen_addresses": [ 00:11:59.717 { 00:11:59.717 "trtype": "TCP", 00:11:59.717 "adrfam": "IPv4", 00:11:59.717 "traddr": "10.0.0.2", 00:11:59.717 "trsvcid": "4420" 00:11:59.717 } 00:11:59.717 ], 00:11:59.717 "allow_any_host": true, 00:11:59.717 "hosts": [], 00:11:59.717 "serial_number": "SPDK00000000000002", 00:11:59.717 "model_number": "SPDK bdev Controller", 00:11:59.717 "max_namespaces": 32, 00:11:59.717 "min_cntlid": 1, 00:11:59.717 "max_cntlid": 65519, 00:11:59.717 "namespaces": [ 00:11:59.717 { 00:11:59.717 "nsid": 1, 00:11:59.717 "bdev_name": "Null2", 00:11:59.717 "name": "Null2", 00:11:59.717 "nguid": "2CBFCEFB7BD140BC81FDF0FC76771D60", 00:11:59.717 "uuid": "2cbfcefb-7bd1-40bc-81fd-f0fc76771d60" 00:11:59.717 } 00:11:59.717 ] 00:11:59.717 }, 00:11:59.717 { 00:11:59.717 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:59.717 "subtype": "NVMe", 00:11:59.717 "listen_addresses": [ 00:11:59.717 { 00:11:59.717 "trtype": "TCP", 00:11:59.717 "adrfam": "IPv4", 00:11:59.717 "traddr": "10.0.0.2", 00:11:59.717 "trsvcid": "4420" 00:11:59.717 } 00:11:59.717 ], 00:11:59.717 "allow_any_host": true, 00:11:59.717 "hosts": [], 00:11:59.717 
"serial_number": "SPDK00000000000003", 00:11:59.717 "model_number": "SPDK bdev Controller", 00:11:59.717 "max_namespaces": 32, 00:11:59.717 "min_cntlid": 1, 00:11:59.717 "max_cntlid": 65519, 00:11:59.717 "namespaces": [ 00:11:59.717 { 00:11:59.717 "nsid": 1, 00:11:59.717 "bdev_name": "Null3", 00:11:59.717 "name": "Null3", 00:11:59.717 "nguid": "89239F8E644440DFAB647AB294B01ED7", 00:11:59.717 "uuid": "89239f8e-6444-40df-ab64-7ab294b01ed7" 00:11:59.717 } 00:11:59.717 ] 00:11:59.717 }, 00:11:59.717 { 00:11:59.717 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:59.717 "subtype": "NVMe", 00:11:59.717 "listen_addresses": [ 00:11:59.717 { 00:11:59.717 "trtype": "TCP", 00:11:59.717 "adrfam": "IPv4", 00:11:59.717 "traddr": "10.0.0.2", 00:11:59.717 "trsvcid": "4420" 00:11:59.717 } 00:11:59.717 ], 00:11:59.717 "allow_any_host": true, 00:11:59.717 "hosts": [], 00:11:59.717 "serial_number": "SPDK00000000000004", 00:11:59.717 "model_number": "SPDK bdev Controller", 00:11:59.717 "max_namespaces": 32, 00:11:59.717 "min_cntlid": 1, 00:11:59.717 "max_cntlid": 65519, 00:11:59.717 "namespaces": [ 00:11:59.717 { 00:11:59.717 "nsid": 1, 00:11:59.717 "bdev_name": "Null4", 00:11:59.717 "name": "Null4", 00:11:59.717 "nguid": "55288A8B8A6A4291AC926DE619F8ABE1", 00:11:59.717 "uuid": "55288a8b-8a6a-4291-ac92-6de619f8abe1" 00:11:59.717 } 00:11:59.717 ] 00:11:59.717 } 00:11:59.717 ] 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.717 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.718 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.975 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:59.975 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:59.975 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:59.975 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:59.975 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:59.976 
05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:59.976 rmmod nvme_tcp 00:11:59.976 rmmod nvme_fabrics 00:11:59.976 rmmod nvme_keyring 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1564020 ']' 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1564020 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1564020 ']' 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1564020 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1564020 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1564020' 00:11:59.976 killing process with pid 1564020 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1564020 00:11:59.976 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1564020 00:12:00.233 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:00.233 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:00.233 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:00.233 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:00.233 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:00.233 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.233 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.233 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.136 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:02.136 00:12:02.136 real 0m5.321s 00:12:02.136 user 0m4.097s 00:12:02.136 sys 0m1.826s 00:12:02.136 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.136 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.136 ************************************ 00:12:02.136 END TEST 
nvmf_target_discovery 00:12:02.136 ************************************ 00:12:02.136 05:31:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:02.136 05:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:02.136 05:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.136 05:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.136 ************************************ 00:12:02.136 START TEST nvmf_referrals 00:12:02.136 ************************************ 00:12:02.136 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:02.394 * Looking for test storage... 00:12:02.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.394 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.395 05:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:02.395 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:04.296 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:04.296 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:04.296 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.296 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:0a:00.1: cvl_0_1' 00:12:04.555 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.555 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.555 05:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:04.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:12:04.555 00:12:04.555 --- 10.0.0.2 ping statistics --- 00:12:04.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.555 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:04.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:12:04.555 00:12:04.555 --- 10.0.0.1 ping statistics --- 00:12:04.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.555 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.555 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1566002 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1566002 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1566002 ']' 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.556 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 [2024-07-25 05:31:58.209433] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:12:04.556 [2024-07-25 05:31:58.209519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.556 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.814 [2024-07-25 05:31:58.280010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.814 [2024-07-25 05:31:58.372705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.814 [2024-07-25 05:31:58.372767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:04.814 [2024-07-25 05:31:58.372794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.814 [2024-07-25 05:31:58.372809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.814 [2024-07-25 05:31:58.372821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.814 [2024-07-25 05:31:58.372904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.814 [2024-07-25 05:31:58.372961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.814 [2024-07-25 05:31:58.373015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.814 [2024-07-25 05:31:58.373018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.814 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.814 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:04.814 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.814 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:04.814 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 [2024-07-25 05:31:58.534882] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 [2024-07-25 05:31:58.547125] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:05.072 05:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:05.072 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.073 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:05.073 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.331 05:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:05.331 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:05.331 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:05.331 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:05.331 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:05.331 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.331 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.331 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.331 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:05.331 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.331 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.331 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.589 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:05.847 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:06.105 05:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.105 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.363 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.363 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:06.363 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:06.363 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:06.363 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:06.363 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:06.363 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:06.364 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.364 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:12:06.364 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.364 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.364 rmmod nvme_tcp 00:12:06.364 rmmod nvme_fabrics 00:12:06.622 rmmod nvme_keyring 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1566002 ']' 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1566002 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1566002 ']' 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1566002 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1566002 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1566002' 00:12:06.622 killing process with pid 1566002 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 1566002 00:12:06.622 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1566002 00:12:06.880 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:06.880 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:06.880 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:06.880 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.880 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.880 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.880 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.880 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.805 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:08.805 00:12:08.805 real 0m6.595s 00:12:08.805 user 0m9.250s 00:12:08.805 sys 0m2.196s 00:12:08.805 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.805 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.805 ************************************ 00:12:08.805 END TEST nvmf_referrals 00:12:08.805 ************************************ 00:12:08.805 05:32:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:08.805 05:32:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- 
# '[' 3 -le 1 ']' 00:12:08.805 05:32:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.805 05:32:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.805 ************************************ 00:12:08.805 START TEST nvmf_connect_disconnect 00:12:08.805 ************************************ 00:12:08.805 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:09.063 * Looking for test storage... 00:12:09.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.063 05:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:09.063 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.064 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.064 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.064 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:09.064 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:09.064 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:09.064 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.964 05:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.964 05:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:10.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:10.964 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.964 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:10.965 05:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:10.965 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.965 
05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:10.965 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.965 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.223 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.223 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.223 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:11.223 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.223 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.223 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.223 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:11.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:11.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:12:11.223 00:12:11.223 --- 10.0.0.2 ping statistics --- 00:12:11.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.223 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:12:11.223 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:12:11.223 00:12:11.223 --- 10.0.0.1 ping statistics --- 00:12:11.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.224 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
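The trace up to this point shows `nvmf/common.sh` building an isolated TCP test topology: one NIC port (`cvl_0_0`) is moved into a network namespace to host the target at 10.0.0.2, while the other port (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening port 4420 and a ping in each direction confirming reachability. The same sequence can be sketched as below; device names and addresses are taken from the log, and the `run` echo-wrapper is an illustrative addition so the sketch can be inspected without root:

```shell
#!/usr/bin/env sh
# Dry-run sketch of the namespace setup the log's nvmf_tcp_init performs.
# run() echoes each command instead of executing it, so no root is required;
# swap the body for "$@" to actually apply the configuration.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # NIC port that becomes the in-namespace target
INITIATOR_IF=cvl_0_1     # NIC port left in the root namespace
NS=cvl_0_0_ns_spdk       # namespace name from the log

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target reachability check
```

Because the target interface lives in its own namespace, the later `nvmf_tgt` process is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the log prefixes the app command with `NVMF_TARGET_NS_CMD`.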
00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1568276 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1568276 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1568276 ']' 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.224 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.224 [2024-07-25 05:32:04.811731] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:12:11.224 [2024-07-25 05:32:04.811803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.224 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.224 [2024-07-25 05:32:04.874168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.482 [2024-07-25 05:32:04.963605] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.482 [2024-07-25 05:32:04.963663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.482 [2024-07-25 05:32:04.963678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.482 [2024-07-25 05:32:04.963691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.482 [2024-07-25 05:32:04.963707] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:11.482 [2024-07-25 05:32:04.963815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.482 [2024-07-25 05:32:04.964410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.482 [2024-07-25 05:32:04.964442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.482 [2024-07-25 05:32:04.964446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.482 [2024-07-25 05:32:05.121577] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.482 05:32:05 
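The `rpc_cmd` calls traced above provision the target in four steps: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks (`Malloc0`), create subsystem `nqn.2016-06.io.spdk:cnode1` with the serial from `NVMF_SERIAL`, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. Issued directly against the running `nvmf_tgt`, the equivalent sequence would look roughly like this sketch; the `rpc.py` path and default socket are assumptions based on a standard SPDK checkout, and `rpc()` echoes instead of executing so the sketch runs without a live target:

```shell
#!/usr/bin/env sh
# Sketch of the target provisioning from connect_disconnect.sh, expressed as
# direct scripts/rpc.py calls. Socket path is the assumed SPDK default;
# rpc() echoes each call so no running nvmf_tgt is needed.
rpc() { echo "+ rpc.py -s /var/tmp/spdk.sock $*"; }

SUBNQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport, as in the log
rpc bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks -> Malloc0
rpc nvmf_create_subsystem "$SUBNQN" -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns "$SUBNQN" Malloc0        # expose the bdev as a namespace
rpc nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
```

The `-a` flag on `nvmf_create_subsystem` allows any host NQN to connect, which is what lets the later `nvme connect` iterations succeed without registering the initiator's host NQN first.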
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.482 [2024-07-25 05:32:05.174257] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:11.482 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:14.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.826 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.538 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.589 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.693 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.901 rmmod nvme_tcp 00:16:02.901 rmmod nvme_fabrics 00:16:02.901 rmmod nvme_keyring 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1568276 ']' 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1568276 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@950 -- # '[' -z 1568276 ']' 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1568276 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1568276 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1568276' 00:16:02.901 killing process with pid 1568276 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1568276 00:16:02.901 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1568276 00:16:03.159 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.159 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:03.159 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:03.159 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.159 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.159 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.159 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.159 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.058 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:05.058 00:16:05.058 real 3m56.278s 00:16:05.058 user 14m59.364s 00:16:05.058 sys 0m34.525s 00:16:05.058 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.058 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:05.058 ************************************ 00:16:05.058 END TEST nvmf_connect_disconnect 00:16:05.058 ************************************ 00:16:05.058 05:35:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:05.058 05:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:05.058 05:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.058 05:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:05.316 ************************************ 00:16:05.316 START TEST nvmf_multitarget 00:16:05.316 ************************************ 00:16:05.316 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:05.316 * Looking for test storage... 
00:16:05.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.316 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.316 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:05.316 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.316 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.316 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:16:05.317 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.218 05:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:07.218 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.218 
05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:07.218 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:07.218 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:07.218 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes 
]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.218 05:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.218 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:07.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:16:07.218 00:16:07.218 --- 10.0.0.2 ping statistics --- 00:16:07.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.219 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:16:07.219 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:07.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:16:07.219 00:16:07.219 --- 10.0.0.1 ping statistics --- 00:16:07.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.219 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:07.219 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.219 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:16:07.219 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.219 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.219 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:07.219 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:07.219 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.219 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:07.219 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1599435 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1599435 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1599435 ']' 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.477 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:07.477 [2024-07-25 05:36:00.976410] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:16:07.477 [2024-07-25 05:36:00.976500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.477 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.477 [2024-07-25 05:36:01.043151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.477 [2024-07-25 05:36:01.138254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.477 [2024-07-25 05:36:01.138306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:07.477 [2024-07-25 05:36:01.138323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.477 [2024-07-25 05:36:01.138337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.477 [2024-07-25 05:36:01.138348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.477 [2024-07-25 05:36:01.138404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.477 [2024-07-25 05:36:01.138459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.477 [2024-07-25 05:36:01.138510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.477 [2024-07-25 05:36:01.138512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.735 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.735 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:07.735 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.735 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.735 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:07.735 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.735 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:07.735 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:07.735 05:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:07.735 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:07.735 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:07.992 "nvmf_tgt_1" 00:16:07.992 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:07.992 "nvmf_tgt_2" 00:16:07.992 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:07.992 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:08.249 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:08.249 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:08.249 true 00:16:08.249 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:08.508 true 00:16:08.508 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:08.508 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:08.508 rmmod nvme_tcp 00:16:08.508 rmmod nvme_fabrics 00:16:08.508 rmmod nvme_keyring 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1599435 ']' 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1599435 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1599435 ']' 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1599435 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1599435 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1599435' 00:16:08.508 killing process with pid 1599435 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1599435 00:16:08.508 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1599435 00:16:08.766 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:08.766 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:08.766 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:08.766 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.766 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:08.766 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.766 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.766 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:11.296 00:16:11.296 real 
0m5.645s 00:16:11.296 user 0m6.386s 00:16:11.296 sys 0m1.881s 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:11.296 ************************************ 00:16:11.296 END TEST nvmf_multitarget 00:16:11.296 ************************************ 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:11.296 ************************************ 00:16:11.296 START TEST nvmf_rpc 00:16:11.296 ************************************ 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:11.296 * Looking for test storage... 
00:16:11.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.296 
05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.296 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.297 05:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:16:11.297 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:13.194 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:13.194 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:13.194 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:13.194 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:13.194 05:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:13.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:13.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:16:13.194 00:16:13.194 --- 10.0.0.2 ping statistics --- 00:16:13.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.194 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:16:13.194 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:13.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:16:13.194 00:16:13.194 --- 10.0.0.1 ping statistics --- 00:16:13.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.194 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1601588 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1601588 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1601588 ']' 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:13.195 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.195 [2024-07-25 05:36:06.788253] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:16:13.195 [2024-07-25 05:36:06.788340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:13.195 EAL: No free 2048 kB hugepages reported on node 1
00:16:13.195 [2024-07-25 05:36:06.862282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:13.452 [2024-07-25 05:36:06.956954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:13.452 [2024-07-25 05:36:06.957017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:13.452 [2024-07-25 05:36:06.957033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:13.452 [2024-07-25 05:36:06.957046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:13.452 [2024-07-25 05:36:06.957057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
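The `-m 0xF` core mask passed to `nvmf_tgt` above selects CPUs 0-3, which is why the app reports "Total cores available: 4" and later starts one reactor per core. A small sketch (not part of the SPDK scripts) of counting how many cores a hex mask enables:

```shell
# Count the set bits in a core mask like 0xF; each set bit is one CPU
# the target may run a reactor on. Pure POSIX shell arithmetic.
count_cores() {
  v=$(($1))              # let the shell parse the 0x-prefixed mask
  n=0
  while [ "$v" -gt 0 ]; do
    n=$((n + (v & 1)))   # add the lowest bit
    v=$((v >> 1))        # shift the mask down
  done
  echo "$n"
}
count_cores 0xF    # prints 4
```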
00:16:13.452 [2024-07-25 05:36:06.957113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:13.452 [2024-07-25 05:36:06.957168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:16:13.452 [2024-07-25 05:36:06.957219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:16:13.452 [2024-07-25 05:36:06.957222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.452 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:16:13.452 "tick_rate": 2700000000,
00:16:13.452 "poll_groups": [
00:16:13.452 {
00:16:13.452 "name": "nvmf_tgt_poll_group_000",
00:16:13.452 "admin_qpairs": 0,
00:16:13.452 "io_qpairs": 0,
00:16:13.452 "current_admin_qpairs": 0,
00:16:13.452 "current_io_qpairs": 0,
00:16:13.452 "pending_bdev_io": 0,
00:16:13.452 "completed_nvme_io": 0,
00:16:13.452 "transports": []
00:16:13.452 },
00:16:13.452 {
00:16:13.452 "name": "nvmf_tgt_poll_group_001",
00:16:13.452 "admin_qpairs": 0,
00:16:13.452 "io_qpairs": 0,
00:16:13.452 "current_admin_qpairs": 0,
00:16:13.452 "current_io_qpairs": 0,
00:16:13.452 "pending_bdev_io": 0,
00:16:13.452 "completed_nvme_io": 0,
00:16:13.452 "transports": []
00:16:13.452 },
00:16:13.452 {
00:16:13.452 "name": "nvmf_tgt_poll_group_002",
00:16:13.452 "admin_qpairs": 0,
00:16:13.452 "io_qpairs": 0,
00:16:13.452 "current_admin_qpairs": 0,
00:16:13.452 "current_io_qpairs": 0,
00:16:13.452 "pending_bdev_io": 0,
00:16:13.452 "completed_nvme_io": 0,
00:16:13.452 "transports": []
00:16:13.452 },
00:16:13.452 {
00:16:13.452 "name": "nvmf_tgt_poll_group_003",
00:16:13.452 "admin_qpairs": 0,
00:16:13.452 "io_qpairs": 0,
00:16:13.452 "current_admin_qpairs": 0,
00:16:13.452 "current_io_qpairs": 0,
00:16:13.452 "pending_bdev_io": 0,
00:16:13.452 "completed_nvme_io": 0,
00:16:13.452 "transports": []
00:16:13.453 }
00:16:13.453 ]
00:16:13.453 }'
00:16:13.453 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:16:13.453 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:16:13.453 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:16:13.453 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.712 [2024-07-25 05:36:07.216120] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:16:13.712 "tick_rate": 2700000000,
00:16:13.712 "poll_groups": [
00:16:13.712 {
00:16:13.712 "name": "nvmf_tgt_poll_group_000",
00:16:13.712 "admin_qpairs": 0,
00:16:13.712 "io_qpairs": 0,
00:16:13.712 "current_admin_qpairs": 0,
00:16:13.712 "current_io_qpairs": 0,
00:16:13.712 "pending_bdev_io": 0,
00:16:13.712 "completed_nvme_io": 0,
00:16:13.712 "transports": [
00:16:13.712 {
00:16:13.712 "trtype": "TCP"
00:16:13.712 }
00:16:13.712 ]
00:16:13.712 },
00:16:13.712 {
00:16:13.712 "name": "nvmf_tgt_poll_group_001",
00:16:13.712 "admin_qpairs": 0,
00:16:13.712 "io_qpairs": 0,
00:16:13.712 "current_admin_qpairs": 0,
00:16:13.712 "current_io_qpairs": 0,
00:16:13.712 "pending_bdev_io": 0,
00:16:13.712 "completed_nvme_io": 0,
00:16:13.712 "transports": [
00:16:13.712 {
00:16:13.712 "trtype": "TCP"
00:16:13.712 }
00:16:13.712 ]
00:16:13.712 },
00:16:13.712 {
00:16:13.712 "name": "nvmf_tgt_poll_group_002",
00:16:13.712 "admin_qpairs": 0,
00:16:13.712 "io_qpairs": 0,
00:16:13.712 "current_admin_qpairs": 0,
00:16:13.712 "current_io_qpairs": 0,
00:16:13.712 "pending_bdev_io": 0,
00:16:13.712 "completed_nvme_io": 0,
00:16:13.712 "transports": [
00:16:13.712 {
00:16:13.712 "trtype": "TCP"
00:16:13.712 }
00:16:13.712 ]
00:16:13.712 },
00:16:13.712 {
00:16:13.712 "name": "nvmf_tgt_poll_group_003",
00:16:13.712 "admin_qpairs": 0,
00:16:13.712 "io_qpairs": 0,
00:16:13.712 "current_admin_qpairs": 0,
00:16:13.712 "current_io_qpairs": 0,
00:16:13.712 "pending_bdev_io": 0,
00:16:13.712 "completed_nvme_io": 0,
00:16:13.712 "transports": [
00:16:13.712 {
00:16:13.712 "trtype": "TCP"
00:16:13.712 }
00:16:13.712 ]
00:16:13.712 }
00:16:13.712 ]
00:16:13.712 }'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:16:13.712 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.713 Malloc1
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.713 [2024-07-25 05:36:07.364030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:13.713 [2024-07-25 05:36:07.386515] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:16:13.713 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:13.713 could not add new controller: failed to write to nvme-fabrics device
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:13.713 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:13.975 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:13.975 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.975 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.975 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
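The `NOT nvme connect ...` step traced above deliberately runs a connect that the target is expected to reject (the host NQN is not yet allowed) and treats the failure as success. A hedged sketch of that status-inverting pattern, with `false` standing in for the real `nvme connect` call:

```shell
# Sketch of the NOT pattern from the trace: invert a command's exit status
# so the test step passes exactly when the expected rejection happens.
# This is an illustration, not the actual autotest helper.
NOT() {
  if "$@"; then
    return 1   # the command unexpectedly succeeded
  fi
  return 0     # the expected failure occurred
}
NOT false && echo rejected-as-expected   # prints rejected-as-expected
```

Inverting the status (rather than ignoring it) keeps the step meaningful: if the target ever stops rejecting the disallowed host, the test fails instead of silently passing.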
00:16:13.975 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:14.539 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:16:14.539 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:14.539 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:14.539 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:14.539 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:16.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:16:16.434 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:16.434 [2024-07-25 05:36:10.135128] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:16:16.691 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:16.691 could not add new controller: failed to write to nvme-fabrics device
00:16:16.691 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:16:16.691 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:16.691 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:16.691 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:16.691 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:16:16.691 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.691 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:16.691 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.691 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:17.256 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:16:17.256 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:17.256 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:17.256 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:17.256 05:36:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:19.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.151 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:19.408 [2024-07-25 05:36:12.871823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.408 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:19.972 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:19.972 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:19.972 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:19.972 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:19.972 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:16:21.867 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:16:21.867 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:16:21.867 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:16:21.867 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:16:21.867 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:16:21.867 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:16:21.867 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:22.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.125 [2024-07-25 05:36:15.625585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.125 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:22.691 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:22.691 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:16:22.691 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.691 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:22.691 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:25.217 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:25.217 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:25.217 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.217 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:25.217 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.217 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:25.217 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.218 [2024-07-25 05:36:18.443728] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.218 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.475 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:25.475 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:25.475 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.475 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:25.475 
05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.000 [2024-07-25 05:36:21.267156] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.000 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:28.566 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:28.566 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:28.566 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.566 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:28.566 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:30.460 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:30.460 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:30.460 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.460 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:30.460 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.460 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:30.460 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
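[Editorial note] The `waitforserial` / `waitforserial_disconnect` helpers traced above poll `lsblk -l -o NAME,SERIAL | grep` for the subsystem serial, retrying up to 15 times with a 2-second sleep (`(( i++ <= 15 ))` / `sleep 2`). A generic sketch of that retry shape follows; `wait_for` and `fake_probe` are hypothetical names, not the actual autotest_common.sh code.

```python
import time

def wait_for(probe, expected=1, attempts=15, delay=2.0, sleep=time.sleep):
    """Retry probe() until it returns `expected` or attempts run out,
    mirroring the shape of waitforserial's lsblk/grep polling loop."""
    for _ in range(attempts):
        if probe() == expected:
            return True
        sleep(delay)
    return False

# Usage with a fake probe that "sees" the device on its third call,
# and a no-op sleep so the example runs instantly.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return 1 if calls["n"] >= 3 else 0

ok = wait_for(fake_probe, sleep=lambda _: None)
```

The real helpers count devices with `grep -c` (connect path) and check for absence with `grep -q -w` (disconnect path), but the bounded retry-with-sleep structure is the same in both.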
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.460 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.461 [2024-07-25 05:36:24.126068] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.461 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:31.390 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:31.390 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:31.390 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.390 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:31.391 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:33.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.293 05:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 [2024-07-25 05:36:26.900101] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 
05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 
05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 [2024-07-25 05:36:26.948139] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.293 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.579 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 [2024-07-25 05:36:26.996323] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.579 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:16:33.579 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 [2024-07-25 05:36:27.044470] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.580 [2024-07-25 05:36:27.092663] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.580 05:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.580 05:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:33.580 "tick_rate": 2700000000, 00:16:33.580 "poll_groups": [ 00:16:33.580 { 00:16:33.580 "name": "nvmf_tgt_poll_group_000", 00:16:33.580 "admin_qpairs": 2, 00:16:33.580 "io_qpairs": 84, 00:16:33.580 "current_admin_qpairs": 0, 00:16:33.580 "current_io_qpairs": 0, 00:16:33.580 "pending_bdev_io": 0, 00:16:33.580 "completed_nvme_io": 211, 00:16:33.580 "transports": [ 00:16:33.580 { 00:16:33.580 "trtype": "TCP" 00:16:33.580 } 00:16:33.580 ] 00:16:33.580 }, 00:16:33.580 { 00:16:33.580 "name": "nvmf_tgt_poll_group_001", 00:16:33.580 "admin_qpairs": 2, 00:16:33.580 "io_qpairs": 84, 00:16:33.580 "current_admin_qpairs": 0, 00:16:33.580 "current_io_qpairs": 0, 00:16:33.580 "pending_bdev_io": 0, 00:16:33.580 "completed_nvme_io": 175, 00:16:33.580 "transports": [ 00:16:33.580 { 00:16:33.580 "trtype": "TCP" 00:16:33.580 } 00:16:33.580 ] 00:16:33.580 }, 00:16:33.580 { 00:16:33.580 "name": "nvmf_tgt_poll_group_002", 00:16:33.580 "admin_qpairs": 1, 00:16:33.580 "io_qpairs": 84, 00:16:33.580 "current_admin_qpairs": 0, 00:16:33.580 "current_io_qpairs": 0, 00:16:33.580 "pending_bdev_io": 0, 00:16:33.580 "completed_nvme_io": 118, 00:16:33.580 "transports": [ 00:16:33.580 { 00:16:33.580 "trtype": "TCP" 00:16:33.580 } 00:16:33.580 ] 00:16:33.580 }, 00:16:33.580 { 00:16:33.580 "name": "nvmf_tgt_poll_group_003", 00:16:33.580 "admin_qpairs": 2, 00:16:33.580 "io_qpairs": 84, 00:16:33.580 "current_admin_qpairs": 0, 00:16:33.580 "current_io_qpairs": 0, 00:16:33.580 "pending_bdev_io": 0, 
00:16:33.580 "completed_nvme_io": 182, 00:16:33.580 "transports": [ 00:16:33.580 { 00:16:33.580 "trtype": "TCP" 00:16:33.580 } 00:16:33.580 ] 00:16:33.580 } 00:16:33.580 ] 00:16:33.580 }' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.580 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:33.580 rmmod nvme_tcp 00:16:33.580 rmmod nvme_fabrics 00:16:33.580 rmmod nvme_keyring 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1601588 ']' 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1601588 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1601588 ']' 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1601588 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1601588 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1601588' 00:16:33.838 killing process with pid 1601588 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1601588 00:16:33.838 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 1601588 00:16:34.097 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.097 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.097 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.097 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.097 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.097 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.097 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.097 05:36:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.999 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:35.999 00:16:35.999 real 0m25.146s 00:16:35.999 user 1m21.631s 00:16:35.999 sys 0m4.129s 00:16:35.999 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.999 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.999 ************************************ 00:16:35.999 END TEST nvmf_rpc 00:16:35.999 ************************************ 00:16:35.999 05:36:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:35.999 05:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:35.999 05:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.999 05:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:16:35.999 ************************************ 00:16:35.999 START TEST nvmf_invalid 00:16:35.999 ************************************ 00:16:35.999 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:36.257 * Looking for test storage... 00:16:36.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:16:36.257 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:16:38.157 05:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.157 
05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:38.157 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:38.158 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.158 05:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:38.158 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.158 
05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:38.158 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:38.158 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:38.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:16:38.158 00:16:38.158 --- 10.0.0.2 ping statistics --- 00:16:38.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.158 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:38.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:38.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:16:38.158 00:16:38.158 --- 10.0.0.1 ping statistics --- 00:16:38.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.158 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:38.158 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1606575 00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1606575 00:16:38.416 05:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1606575 ']'
00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:38.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:38.416 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:16:38.416 [2024-07-25 05:36:31.931216] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization...
00:16:38.416 [2024-07-25 05:36:31.931325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:38.416 EAL: No free 2048 kB hugepages reported on node 1
00:16:38.416 [2024-07-25 05:36:32.000412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:38.416 [2024-07-25 05:36:32.090941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:38.416 [2024-07-25 05:36:32.091007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:38.416 [2024-07-25 05:36:32.091021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:38.416 [2024-07-25 05:36:32.091032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:38.416 [2024-07-25 05:36:32.091042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:38.416 [2024-07-25 05:36:32.091505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:38.416 [2024-07-25 05:36:32.091551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:16:38.417 [2024-07-25 05:36:32.091589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:16:38.417 [2024-07-25 05:36:32.091593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:38.675 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:38.675 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0
00:16:38.675 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:16:38.675 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:38.675 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:16:38.675 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:38.675 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:16:38.675 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5044
00:16:38.933 [2024-07-25 05:36:32.483439]
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:16:38.933 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:16:38.933 {
00:16:38.933 "nqn": "nqn.2016-06.io.spdk:cnode5044",
00:16:38.933 "tgt_name": "foobar",
00:16:38.933 "method": "nvmf_create_subsystem",
00:16:38.933 "req_id": 1
00:16:38.933 }
00:16:38.933 Got JSON-RPC error response
00:16:38.933 response:
00:16:38.933 {
00:16:38.933 "code": -32603,
00:16:38.933 "message": "Unable to find target foobar"
00:16:38.933 }'
00:16:38.933 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:16:38.933 {
00:16:38.933 "nqn": "nqn.2016-06.io.spdk:cnode5044",
00:16:38.933 "tgt_name": "foobar",
00:16:38.933 "method": "nvmf_create_subsystem",
00:16:38.933 "req_id": 1
00:16:38.933 }
00:16:38.933 Got JSON-RPC error response
00:16:38.933 response:
00:16:38.933 {
00:16:38.933 "code": -32603,
00:16:38.933 "message": "Unable to find target foobar"
00:16:38.933 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:16:38.933 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:16:38.933 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18496
00:16:39.190 [2024-07-25 05:36:32.732264] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18496: invalid serial number 'SPDKISFASTANDAWESOME'
00:16:39.190 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:16:39.190 {
00:16:39.190 "nqn": "nqn.2016-06.io.spdk:cnode18496",
00:16:39.190 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:16:39.190 "method": "nvmf_create_subsystem",
00:16:39.190 "req_id": 1
00:16:39.190 }
00:16:39.190 Got JSON-RPC error response
00:16:39.190 response:
00:16:39.190 {
00:16:39.190 "code": -32602,
00:16:39.190 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:16:39.190 }'
00:16:39.190 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:16:39.190 {
00:16:39.190 "nqn": "nqn.2016-06.io.spdk:cnode18496",
00:16:39.190 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:16:39.190 "method": "nvmf_create_subsystem",
00:16:39.190 "req_id": 1
00:16:39.190 }
00:16:39.190 Got JSON-RPC error response
00:16:39.190 response:
00:16:39.190 {
00:16:39.190 "code": -32602,
00:16:39.190 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:16:39.190 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:16:39.190 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:16:39.190 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14959
00:16:39.448 [2024-07-25 05:36:32.977091] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14959: invalid model number 'SPDK_Controller'
00:16:39.448 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:16:39.448 {
00:16:39.448 "nqn": "nqn.2016-06.io.spdk:cnode14959",
00:16:39.448 "model_number": "SPDK_Controller\u001f",
00:16:39.448 "method": "nvmf_create_subsystem",
00:16:39.448 "req_id": 1
00:16:39.448 }
00:16:39.448 Got JSON-RPC error response
00:16:39.448 response:
00:16:39.448 {
00:16:39.448 "code": -32602,
00:16:39.448 "message": "Invalid MN SPDK_Controller\u001f"
00:16:39.448 }'
00:16:39.448 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:16:39.448 {
00:16:39.448 "nqn": "nqn.2016-06.io.spdk:cnode14959",
00:16:39.448 "model_number": "SPDK_Controller\u001f",
00:16:39.448 "method": "nvmf_create_subsystem",
00:16:39.448 "req_id": 1
00:16:39.448 }
00:16:39.448 Got JSON-RPC error response 00:16:39.448 response: 00:16:39.448 { 00:16:39.448 "code": -32602, 00:16:39.448 "message": "Invalid MN SPDK_Controller\u001f" 00:16:39.448 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:39.448 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:39.448 05:36:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.448 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.448 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.448 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:39.449 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:39.449 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:39.449 
05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 7 == \- ]] 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '781Vc~ef9|O1c0?G9A_ed' 00:16:39.449 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '781Vc~ef9|O1c0?G9A_ed' nqn.2016-06.io.spdk:cnode19678 00:16:39.726 [2024-07-25 05:36:33.322281] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19678: invalid serial number '781Vc~ef9|O1c0?G9A_ed' 
00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:39.726 { 00:16:39.726 "nqn": "nqn.2016-06.io.spdk:cnode19678", 00:16:39.726 "serial_number": "781Vc~ef9|O1c0?G9A_ed", 00:16:39.726 "method": "nvmf_create_subsystem", 00:16:39.726 "req_id": 1 00:16:39.726 } 00:16:39.726 Got JSON-RPC error response 00:16:39.726 response: 00:16:39.726 { 00:16:39.726 "code": -32602, 00:16:39.726 "message": "Invalid SN 781Vc~ef9|O1c0?G9A_ed" 00:16:39.726 }' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:39.726 { 00:16:39.726 "nqn": "nqn.2016-06.io.spdk:cnode19678", 00:16:39.726 "serial_number": "781Vc~ef9|O1c0?G9A_ed", 00:16:39.726 "method": "nvmf_create_subsystem", 00:16:39.726 "req_id": 1 00:16:39.726 } 00:16:39.726 Got JSON-RPC error response 00:16:39.726 response: 00:16:39.726 { 00:16:39.726 "code": -32602, 00:16:39.726 "message": "Invalid SN 781Vc~ef9|O1c0?G9A_ed" 00:16:39.726 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:39.726 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:39.726 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:39.726 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.726 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:39.727 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:39.727 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.727 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:39.984 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 
00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:39.984 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 
00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'wkM&)r^IAE53w]O/8]!~Sqb7pEs;@?2X}`aV!o/#|' 00:16:39.985 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'wkM&)r^IAE53w]O/8]!~Sqb7pEs;@?2X}`aV!o/#|' nqn.2016-06.io.spdk:cnode24492 00:16:40.242 [2024-07-25 05:36:33.719579] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24492: invalid model number 'wkM&)r^IAE53w]O/8]!~Sqb7pEs;@?2X}`aV!o/#|' 00:16:40.242 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:40.242 { 00:16:40.242 "nqn": "nqn.2016-06.io.spdk:cnode24492", 00:16:40.242 "model_number": "wkM&)r^IAE53w]O/8]!~Sqb7pEs;@?2X}`aV!o/#|", 00:16:40.242 "method": "nvmf_create_subsystem", 00:16:40.242 "req_id": 1 00:16:40.242 } 00:16:40.242 Got JSON-RPC error response 00:16:40.242 response: 00:16:40.242 { 00:16:40.242 "code": -32602, 00:16:40.242 "message": "Invalid MN wkM&)r^IAE53w]O/8]!~Sqb7pEs;@?2X}`aV!o/#|" 00:16:40.242 }' 00:16:40.242 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:40.242 { 00:16:40.242 "nqn": "nqn.2016-06.io.spdk:cnode24492", 00:16:40.242 "model_number": "wkM&)r^IAE53w]O/8]!~Sqb7pEs;@?2X}`aV!o/#|", 00:16:40.242 "method": "nvmf_create_subsystem", 00:16:40.242 "req_id": 1 00:16:40.242 } 00:16:40.242 Got JSON-RPC error response 00:16:40.242 response: 00:16:40.242 { 00:16:40.242 "code": -32602, 
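The long run of `printf %x` / `echo -e` steps above is `invalid.sh` building a random model-number string one character at a time before feeding it to `nvmf_create_subsystem -d`. A minimal self-contained sketch of that technique (the function name and length here are illustrative, not taken from the script):

```shell
#!/usr/bin/env bash
# Build a string of random printable ASCII characters one at a time:
# pick a code point, format it as hex with `printf %x`, decode it with
# `echo -e '\xNN'`, and append to the accumulator -- the same loop the
# xtrace above shows as repeated `string+=...` steps.
gen_random_string() {
    local length=$1 string='' ll code
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))              # printable, non-space ASCII
        string+=$(echo -e "\\x$(printf %x "$code")")
    done
    echo "$string"
}

gen_random_string 41   # e.g. something like 'wkM&)r^IAE53w]O/8]!~Sqb7pEs;@?2X}`aV!o/#|'
```

The hex round-trip is what lets the script emit any byte value, including shell metacharacters like `` ` `` and `|`, without quoting headaches inside the loop.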
00:16:40.242 "message": "Invalid MN wkM&)r^IAE53w]O/8]!~Sqb7pEs;@?2X}`aV!o/#|" 00:16:40.242 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:40.242 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:40.500 [2024-07-25 05:36:33.972484] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.500 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:40.757 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:40.757 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:40.757 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:40.757 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:40.757 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:41.015 [2024-07-25 05:36:34.486141] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:41.015 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:41.015 { 00:16:41.015 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:41.015 "listen_address": { 00:16:41.015 "trtype": "tcp", 00:16:41.015 "traddr": "", 00:16:41.015 "trsvcid": "4421" 00:16:41.015 }, 00:16:41.015 "method": "nvmf_subsystem_remove_listener", 00:16:41.015 "req_id": 1 00:16:41.015 } 00:16:41.015 Got JSON-RPC error response 00:16:41.015 response: 00:16:41.015 { 00:16:41.015 "code": -32602, 00:16:41.015 "message": "Invalid parameters" 00:16:41.015 }' 00:16:41.015 05:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:41.015 { 00:16:41.015 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:41.015 "listen_address": { 00:16:41.015 "trtype": "tcp", 00:16:41.015 "traddr": "", 00:16:41.015 "trsvcid": "4421" 00:16:41.015 }, 00:16:41.015 "method": "nvmf_subsystem_remove_listener", 00:16:41.015 "req_id": 1 00:16:41.015 } 00:16:41.015 Got JSON-RPC error response 00:16:41.015 response: 00:16:41.015 { 00:16:41.015 "code": -32602, 00:16:41.015 "message": "Invalid parameters" 00:16:41.015 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:41.015 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12062 -i 0 00:16:41.273 [2024-07-25 05:36:34.730920] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12062: invalid cntlid range [0-65519] 00:16:41.273 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:41.273 { 00:16:41.273 "nqn": "nqn.2016-06.io.spdk:cnode12062", 00:16:41.273 "min_cntlid": 0, 00:16:41.273 "method": "nvmf_create_subsystem", 00:16:41.273 "req_id": 1 00:16:41.273 } 00:16:41.273 Got JSON-RPC error response 00:16:41.273 response: 00:16:41.273 { 00:16:41.273 "code": -32602, 00:16:41.273 "message": "Invalid cntlid range [0-65519]" 00:16:41.273 }' 00:16:41.273 05:36:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:41.273 { 00:16:41.273 "nqn": "nqn.2016-06.io.spdk:cnode12062", 00:16:41.273 "min_cntlid": 0, 00:16:41.273 "method": "nvmf_create_subsystem", 00:16:41.273 "req_id": 1 00:16:41.273 } 00:16:41.273 Got JSON-RPC error response 00:16:41.273 response: 00:16:41.273 { 00:16:41.273 "code": -32602, 00:16:41.273 "message": "Invalid cntlid range [0-65519]" 00:16:41.273 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:41.273 05:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9961 -i 65520 00:16:41.531 [2024-07-25 05:36:34.983752] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9961: invalid cntlid range [65520-65519] 00:16:41.531 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:41.531 { 00:16:41.531 "nqn": "nqn.2016-06.io.spdk:cnode9961", 00:16:41.531 "min_cntlid": 65520, 00:16:41.531 "method": "nvmf_create_subsystem", 00:16:41.531 "req_id": 1 00:16:41.531 } 00:16:41.531 Got JSON-RPC error response 00:16:41.531 response: 00:16:41.531 { 00:16:41.531 "code": -32602, 00:16:41.531 "message": "Invalid cntlid range [65520-65519]" 00:16:41.531 }' 00:16:41.531 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:41.531 { 00:16:41.531 "nqn": "nqn.2016-06.io.spdk:cnode9961", 00:16:41.531 "min_cntlid": 65520, 00:16:41.531 "method": "nvmf_create_subsystem", 00:16:41.531 "req_id": 1 00:16:41.531 } 00:16:41.531 Got JSON-RPC error response 00:16:41.531 response: 00:16:41.531 { 00:16:41.531 "code": -32602, 00:16:41.531 "message": "Invalid cntlid range [65520-65519]" 00:16:41.531 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:41.531 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27529 -I 0 00:16:41.531 [2024-07-25 05:36:35.224568] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27529: invalid cntlid range [1-0] 00:16:41.789 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:41.789 { 00:16:41.789 "nqn": "nqn.2016-06.io.spdk:cnode27529", 00:16:41.789 "max_cntlid": 0, 00:16:41.789 "method": "nvmf_create_subsystem", 
00:16:41.789 "req_id": 1 00:16:41.789 } 00:16:41.789 Got JSON-RPC error response 00:16:41.789 response: 00:16:41.789 { 00:16:41.789 "code": -32602, 00:16:41.789 "message": "Invalid cntlid range [1-0]" 00:16:41.789 }' 00:16:41.789 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:41.789 { 00:16:41.789 "nqn": "nqn.2016-06.io.spdk:cnode27529", 00:16:41.789 "max_cntlid": 0, 00:16:41.789 "method": "nvmf_create_subsystem", 00:16:41.789 "req_id": 1 00:16:41.789 } 00:16:41.789 Got JSON-RPC error response 00:16:41.789 response: 00:16:41.789 { 00:16:41.789 "code": -32602, 00:16:41.789 "message": "Invalid cntlid range [1-0]" 00:16:41.789 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:41.789 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21135 -I 65520 00:16:42.047 [2024-07-25 05:36:35.505462] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21135: invalid cntlid range [1-65520] 00:16:42.047 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:42.047 { 00:16:42.047 "nqn": "nqn.2016-06.io.spdk:cnode21135", 00:16:42.047 "max_cntlid": 65520, 00:16:42.047 "method": "nvmf_create_subsystem", 00:16:42.047 "req_id": 1 00:16:42.047 } 00:16:42.047 Got JSON-RPC error response 00:16:42.047 response: 00:16:42.047 { 00:16:42.047 "code": -32602, 00:16:42.047 "message": "Invalid cntlid range [1-65520]" 00:16:42.047 }' 00:16:42.047 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:42.047 { 00:16:42.047 "nqn": "nqn.2016-06.io.spdk:cnode21135", 00:16:42.047 "max_cntlid": 65520, 00:16:42.047 "method": "nvmf_create_subsystem", 00:16:42.047 "req_id": 1 00:16:42.047 } 00:16:42.047 Got JSON-RPC error response 00:16:42.047 response: 00:16:42.047 { 00:16:42.047 "code": -32602, 
00:16:42.047 "message": "Invalid cntlid range [1-65520]" 00:16:42.047 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:42.047 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5752 -i 6 -I 5 00:16:42.047 [2024-07-25 05:36:35.746299] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5752: invalid cntlid range [6-5] 00:16:42.305 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:42.305 { 00:16:42.305 "nqn": "nqn.2016-06.io.spdk:cnode5752", 00:16:42.305 "min_cntlid": 6, 00:16:42.305 "max_cntlid": 5, 00:16:42.305 "method": "nvmf_create_subsystem", 00:16:42.305 "req_id": 1 00:16:42.305 } 00:16:42.305 Got JSON-RPC error response 00:16:42.305 response: 00:16:42.305 { 00:16:42.305 "code": -32602, 00:16:42.305 "message": "Invalid cntlid range [6-5]" 00:16:42.305 }' 00:16:42.305 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:42.305 { 00:16:42.305 "nqn": "nqn.2016-06.io.spdk:cnode5752", 00:16:42.305 "min_cntlid": 6, 00:16:42.305 "max_cntlid": 5, 00:16:42.305 "method": "nvmf_create_subsystem", 00:16:42.305 "req_id": 1 00:16:42.305 } 00:16:42.305 Got JSON-RPC error response 00:16:42.305 response: 00:16:42.305 { 00:16:42.305 "code": -32602, 00:16:42.305 "message": "Invalid cntlid range [6-5]" 00:16:42.305 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:42.305 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:42.306 { 00:16:42.306 "name": "foobar", 00:16:42.306 "method": "nvmf_delete_target", 00:16:42.306 "req_id": 1 00:16:42.306 } 
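Each negative test above has the same shape: capture the JSON-RPC error from `rpc.py` into `$out`, then glob-match the expected message (xtrace prints the pattern with every character backslash-escaped, which is why it appears as `*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e*`). A reduced sketch with a canned response standing in for a live `rpc.py` call:

```shell
#!/usr/bin/env bash
# Canned JSON-RPC error response in place of real
# `rpc.py nvmf_create_subsystem ...` output; the tests above capture
# this into $out the same way via command substitution.
out='request:
{
  "nqn": "nqn.2016-06.io.spdk:cnode12062",
  "min_cntlid": 0,
  "method": "nvmf_create_subsystem",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32602,
  "message": "Invalid cntlid range [0-65519]"
}'

# Glob-match the expected substring; quoting the middle makes it literal.
if [[ $out == *"Invalid cntlid range"* ]]; then
    echo "got the expected cntlid error"
fi
```

Matching on the human-readable `message` text rather than parsing the JSON keeps the assertion a one-liner, at the cost of coupling the test to the exact error wording.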
00:16:42.306 Got JSON-RPC error response 00:16:42.306 response: 00:16:42.306 { 00:16:42.306 "code": -32602, 00:16:42.306 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:42.306 }' 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:42.306 { 00:16:42.306 "name": "foobar", 00:16:42.306 "method": "nvmf_delete_target", 00:16:42.306 "req_id": 1 00:16:42.306 } 00:16:42.306 Got JSON-RPC error response 00:16:42.306 response: 00:16:42.306 { 00:16:42.306 "code": -32602, 00:16:42.306 "message": "The specified target doesn't exist, cannot delete it." 00:16:42.306 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.306 rmmod nvme_tcp 00:16:42.306 rmmod nvme_fabrics 00:16:42.306 rmmod nvme_keyring 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 
0 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1606575 ']' 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1606575 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1606575 ']' 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1606575 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1606575 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1606575' 00:16:42.306 killing process with pid 1606575 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1606575 00:16:42.306 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1606575 00:16:42.564 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:42.564 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:42.564 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:42.564 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.564 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
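The `killprocess` shutdown above is deliberately defensive: before signalling the target pid it re-checks that the pid still resolves to a plausible process name. A sketch of that guard, assuming a Linux procps environment and using the shell's own pid instead of a real nvmf target:

```shell
#!/usr/bin/env bash
# Guarded kill: resolve the pid's command name first (as killprocess does
# with `ps --no-headers -o comm=`), and only signal if it looks sane.
pid=$$                                            # stand-in for the target pid
name=$(ps --no-headers -o comm= "$pid" 2>/dev/null || cat "/proc/$pid/comm")
if [ -n "$name" ] && [ "$name" != sudo ]; then
    echo "killing process with pid $pid ($name)"  # real script: kill "$pid"; wait "$pid"
fi
```

The `!= sudo` check mirrors the log's `'[' reactor_0 = sudo ']'` step: it avoids accidentally killing the sudo wrapper when the pid has been recycled.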
nvmf/common.sh@278 -- # remove_spdk_ns 00:16:42.564 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.564 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.564 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.097 00:16:45.097 real 0m8.559s 00:16:45.097 user 0m19.919s 00:16:45.097 sys 0m2.429s 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:45.097 ************************************ 00:16:45.097 END TEST nvmf_invalid 00:16:45.097 ************************************ 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:45.097 ************************************ 00:16:45.097 START TEST nvmf_connect_stress 00:16:45.097 ************************************ 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:45.097 * Looking for test storage... 
00:16:45.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.097 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:47.000 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.000 05:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:47.000 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.000 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:47.001 05:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:47.001 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:47.001 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:47.001 
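The device-discovery trace above resolves each PCI function (e.g. 0000:0a:00.0) to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/`, which is how the `cvl_0_0` / `cvl_0_1` names are found. A minimal standalone sketch of that lookup, using a fake sysfs root so it runs without the hardware (the path layout matches what `nvmf/common.sh` globs; the helper name `pci_to_netdev` is illustrative, not part of the script):

```shell
#!/bin/sh
# Map a PCI address to its network interface name(s) by listing the
# net/ subdirectory under the device's sysfs node, mirroring the
# pci_net_devs glob in gather_supported_nvmf_pci_devs.
# $1 = sysfs root (normally /sys/bus/pci/devices), $2 = PCI address
pci_to_netdev() {
    sysfs_root=$1
    pci=$2
    for d in "$sysfs_root/$pci/net/"*; do
        [ -e "$d" ] || continue   # glob matched nothing
        basename "$d"
    done
}

# Demo against a fake sysfs tree (a real run would pass /sys/bus/pci/devices)
root=$(mktemp -d)
mkdir -p "$root/0000:0a:00.0/net/cvl_0_0"
pci_to_netdev "$root" "0000:0a:00.0"
rm -rf "$root"
```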
05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.001 
05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:47.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:16:47.001 00:16:47.001 --- 10.0.0.2 ping statistics --- 00:16:47.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.001 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:47.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:16:47.001 00:16:47.001 --- 10.0.0.1 ping statistics --- 00:16:47.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.001 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1609204 00:16:47.001 05:36:40 
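The `nvmf_tcp_init` sequence above builds a back-to-back TCP test topology from the two physical ports: `cvl_0_0` is moved into a private network namespace as the target side (10.0.0.2), `cvl_0_1` stays in the root namespace as the initiator side (10.0.0.1), a firewall rule admits port 4420, and a ping in each direction confirms reachability. A condensed sketch of that sequence, rendered as a dry-run that prints each privileged command instead of executing it (interface and namespace names are taken from the log; the `run_cmd` wrapper is illustrative):

```shell
#!/bin/sh
# Dry-run sketch of the netns topology set up by nvmf_tcp_init.
# Set DRY_RUN= (empty) and run as root to actually apply it.
DRY_RUN=echo
run_cmd() { $DRY_RUN "$@"; }

TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

run_cmd ip -4 addr flush "$TGT_IF"
run_cmd ip -4 addr flush "$INI_IF"
run_cmd ip netns add "$NS"
run_cmd ip link set "$TGT_IF" netns "$NS"            # target port lives in the namespace
run_cmd ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator side
run_cmd ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side
run_cmd ip link set "$INI_IF" up
run_cmd ip netns exec "$NS" ip link set "$TGT_IF" up
run_cmd ip netns exec "$NS" ip link set lo up
run_cmd iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run_cmd ping -c 1 10.0.0.2                           # initiator -> target reachability
```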
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1609204 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1609204 ']' 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.001 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.001 [2024-07-25 05:36:40.586466] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:16:47.001 [2024-07-25 05:36:40.586547] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.001 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.001 [2024-07-25 05:36:40.655041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:47.260 [2024-07-25 05:36:40.752205] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:47.260 [2024-07-25 05:36:40.752263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.260 [2024-07-25 05:36:40.752283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.260 [2024-07-25 05:36:40.752297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.260 [2024-07-25 05:36:40.752309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.260 [2024-07-25 05:36:40.752369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.260 [2024-07-25 05:36:40.752429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.260 [2024-07-25 05:36:40.752426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.260 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.260 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:16:47.260 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:47.260 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:47.260 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.260 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.260 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:16:47.261 [2024-07-25 05:36:40.900347] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.261 [2024-07-25 05:36:40.929566] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.261 NULL1 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
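At this point `connect_stress.sh` has provisioned the target over the SPDK RPC socket: create the TCP transport, create subsystem `cnode1` with a 10-namespace cap, add a listener on 10.0.0.2:4420, and create the `NULL1` null bdev to back it. The same sequence as it might be issued with `scripts/rpc.py` outside the test harness, rendered here as a dry-run echo so it runs without a live `nvmf_tgt` (RPC names and arguments are copied from the log; the `RPC` wrapper variable is illustrative):

```shell
#!/bin/sh
# Dry-run of the RPC provisioning sequence from connect_stress.sh.
# With a running nvmf_tgt, set RPC="scripts/rpc.py -s /var/tmp/spdk.sock".
RPC="echo rpc.py"
SUBNQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192                       # flags as recorded in the log
$RPC nvmf_create_subsystem "$SUBNQN" -a -s SPDK00000000000001 -m 10  # -m 10: max namespaces
$RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                               # 1000 MiB, 512-byte blocks
```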
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1609232 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.261 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.519 05:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.519 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.777 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.777 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:47.777 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.777 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.777 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.035 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.035 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:48.035 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.035 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.035 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.293 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.293 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:48.293 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.294 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.294 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.859 
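The repeating `kill -0 1609232` / `rpc_cmd` pairs from here onward are the stress loop's heartbeat: `kill -0` delivers no signal, it only reports via its exit status whether the connect_stress process (PID 1609232) still exists, so the test fails fast if the workload dies early. A self-contained illustration of that liveness check, with a short-lived `sleep` standing in for the connect_stress workload:

```shell
#!/bin/sh
# kill -0 sends no signal; its exit status just says whether the PID
# exists and is signalable -- the same probe connect_stress.sh runs
# between RPC calls to confirm the stress process is still alive.
sleep 30 &           # stand-in for the connect_stress workload
PID=$!

if kill -0 "$PID" 2>/dev/null; then
    echo "process $PID alive"
fi

kill "$PID"
wait "$PID" 2>/dev/null || true   # reap it so the PID is fully gone

if ! kill -0 "$PID" 2>/dev/null; then
    echo "process $PID gone"
fi
```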
05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.859 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:48.859 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.859 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.859 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.117 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.117 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:49.117 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.117 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.117 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.375 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.375 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:49.375 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.375 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.375 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.659 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.659 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 
00:16:49.659 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.659 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.659 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.948 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.948 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:49.948 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.948 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.948 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.205 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.205 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:50.205 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.205 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.205 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.770 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.770 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:50.770 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.770 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:50.770 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.028 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.028 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:51.028 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.028 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.028 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.286 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.286 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:51.286 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.286 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.286 05:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.544 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.544 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:51.544 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.544 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.544 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.802 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:16:51.802 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:51.802 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.802 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.802 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.368 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.368 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:52.368 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.368 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.368 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.626 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.626 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:52.626 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.626 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.626 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.884 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.884 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:52.884 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.884 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.884 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.142 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.142 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:53.142 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.142 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.142 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.400 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.400 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:53.400 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.400 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.400 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.965 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.965 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:53.965 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.965 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.965 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@10 -- # set +x 00:16:54.223 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.223 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:54.223 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.223 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.223 05:36:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.481 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.481 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:54.481 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.481 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.481 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.739 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.739 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:54.739 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.739 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.739 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.305 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.305 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:55.305 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.305 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.305 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.563 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.563 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:55.563 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.563 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.563 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.820 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.821 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:55.821 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.821 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.821 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.078 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.078 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:56.078 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.078 05:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.078 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.337 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.337 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:56.337 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.337 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.337 05:36:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.902 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.902 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:56.902 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.902 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.902 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.160 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.160 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:57.160 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.160 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.160 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.416 
05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.416 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:57.416 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.416 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.416 05:36:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.673 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1609232 00:16:57.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1609232) - No such process 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1609232 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.673 05:36:51 
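The long run of `kill -0 1609232` checks above is a liveness poll: connect_stress.sh confirms the stress process still exists (signal 0 delivers nothing, it only tests the PID), issues an RPC, and repeats until `kill -0` fails with "No such process", after which it `wait`s and cleans up. A standalone sketch of that pattern, with a short `sleep` standing in for the stress workload:

```shell
# Poll a background job with `kill -0` until it exits, then reap it.
# `sleep 2` is a stand-in for the real stress process.
sleep 2 &
pid=$!

while kill -0 "$pid" 2>/dev/null; do
    # the real test runs an rpc_cmd against the target here
    sleep 0.5
done

wait "$pid"    # collect the exit status once the PID is gone
echo "process $pid exited"
```

Note that `wait` only works for children of the current shell, which is why the script both polls with `kill -0` and finishes with `wait` rather than relying on either alone.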
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.673 rmmod nvme_tcp 00:16:57.673 rmmod nvme_fabrics 00:16:57.673 rmmod nvme_keyring 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1609204 ']' 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1609204 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1609204 ']' 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1609204 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1609204 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1609204' 00:16:57.673 killing process with pid 1609204 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1609204 00:16:57.673 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1609204 00:16:57.930 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:57.930 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:57.930 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:57.930 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.930 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:57.930 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.930 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.930 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.456 00:17:00.456 real 0m15.356s 00:17:00.456 user 0m38.242s 00:17:00.456 sys 0m6.033s 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.456 ************************************ 00:17:00.456 END TEST nvmf_connect_stress 00:17:00.456 ************************************ 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
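The teardown above (`kill -0`, `uname`, `ps --no-headers -o comm=`, the `reactor_1 = sudo` comparison) shows the guard autotest_common.sh's killprocess applies before killing the target app: verify the PID still exists and that its command name is not the `sudo` wrapper. A hedged sketch of that guard (`safe_kill` is an illustrative name, not the script's actual function):

```shell
# Guarded kill modeled on the killprocess sequence in the trace:
# skip if the PID is already gone, refuse to kill a bare sudo wrapper,
# otherwise terminate and reap.
safe_kill() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already exited
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                  # never kill the sudo wrapper
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null || true                 # reap if it was our child
}

sleep 30 &
safe_kill "$!"
```

The comm-name check matters in CI because the tracked PID can belong to a `sudo` parent rather than the app itself; killing the wrapper would leave the real process (here `reactor_1`) running.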
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.456 ************************************ 00:17:00.456 START TEST nvmf_fused_ordering 00:17:00.456 ************************************ 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:00.456 * Looking for test storage... 00:17:00.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.456 05:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:00.456 05:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.456 05:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.352 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.353 05:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:02.353 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:02.353 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:02.353 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.353 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.353 05:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:02.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:17:02.353 00:17:02.353 --- 10.0.0.2 ping statistics --- 00:17:02.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.353 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:17:02.353 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:17:02.353 00:17:02.353 --- 10.0.0.1 ping statistics --- 00:17:02.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.353 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1612419 00:17:02.354 05:36:55 
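The log segment above shows the harness building its two-port TCP test topology: one port of the NIC pair is moved into a network namespace to act as the target, the other stays in the root namespace as the initiator, and a ping in each direction confirms connectivity before the target starts. A minimal sketch of that sequence follows, written as a dry-run (commands are echoed, not executed, since the real ones need root and the `cvl_0_*` hardware); the namespace, interface names, and addresses are taken from the log, while the `run` wrapper is a hypothetical helper for illustration.

```shell
# Dry-run sketch of the netns topology setup seen in the log.
# Swap run() for `sudo "$@"` to apply the commands for real (requires root + the NIC pair).
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
run ping -c 1 10.0.0.2                                       # initiator -> target reachability
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator reachability
```

The namespace split is what lets a single machine exercise a real TCP path between target and initiator: every subsequent `nvmf_tgt` invocation in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` so the target only sees the namespaced port.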
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1612419 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1612419 ']' 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.354 05:36:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.354 [2024-07-25 05:36:55.913749] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:17:02.354 [2024-07-25 05:36:55.913834] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.354 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.354 [2024-07-25 05:36:55.979755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.612 [2024-07-25 05:36:56.069269] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:02.612 [2024-07-25 05:36:56.069323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.613 [2024-07-25 05:36:56.069339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.613 [2024-07-25 05:36:56.069352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.613 [2024-07-25 05:36:56.069364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.613 [2024-07-25 05:36:56.069401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.613 [2024-07-25 05:36:56.216978] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.613 [2024-07-25 05:36:56.233210] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.613 NULL1 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:02.613 05:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.613 05:36:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:02.613 [2024-07-25 05:36:56.278398] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
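The `rpc_cmd` calls above configure the running target step by step: create the TCP transport, create a subsystem with a 10-queue limit, attach a TCP listener on port 4420, back it with a null bdev, and add that bdev as a namespace before launching the `fused_ordering` binary against it. A sketch of the same sequence as plain SPDK `rpc.py` calls follows, again as a dry-run (the echoed command names and arguments mirror the log; the `rpc.py` path on a real system would typically be `scripts/rpc.py` in the SPDK tree, which is an assumption here).

```shell
# Dry-run sketch of the RPC sequence from the log; replace the echo with the
# real scripts/rpc.py (pointed at the target's /var/tmp/spdk.sock) to execute.
RPC="echo rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8192-byte in-capsule data
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                          # allow-any-host, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                              # listen on the namespaced port
$RPC bdev_null_create NULL1 1000 512                         # 1000 MiB null bdev, 512-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1  # expose it as namespace 1
```

With the subsystem exposed, the test connects with the transport ID string seen in the log (`trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`) and issues the numbered fused compare-and-write pairs that produce the `fused_ordering(N)` lines below.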
00:17:02.613 [2024-07-25 05:36:56.278445] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612514 ] 00:17:02.613 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.180 Attached to nqn.2016-06.io.spdk:cnode1 00:17:03.180 Namespace ID: 1 size: 1GB 00:17:03.180 fused_ordering(0) 00:17:03.180 fused_ordering(1) 00:17:03.180 fused_ordering(2) 00:17:03.180 fused_ordering(3) 00:17:03.180 fused_ordering(4) 00:17:03.180 fused_ordering(5) 00:17:03.180 fused_ordering(6) 00:17:03.180 fused_ordering(7) 00:17:03.180 fused_ordering(8) 00:17:03.180 fused_ordering(9) 00:17:03.180 fused_ordering(10) 00:17:03.180 fused_ordering(11) 00:17:03.180 fused_ordering(12) 00:17:03.180 fused_ordering(13) 00:17:03.180 fused_ordering(14) 00:17:03.180 fused_ordering(15) 00:17:03.180 fused_ordering(16) 00:17:03.180 fused_ordering(17) 00:17:03.180 fused_ordering(18) 00:17:03.180 fused_ordering(19) 00:17:03.180 fused_ordering(20) 00:17:03.180 fused_ordering(21) 00:17:03.180 fused_ordering(22) 00:17:03.180 fused_ordering(23) 00:17:03.180 fused_ordering(24) 00:17:03.180 fused_ordering(25) 00:17:03.180 fused_ordering(26) 00:17:03.180 fused_ordering(27) 00:17:03.180 fused_ordering(28) 00:17:03.180 fused_ordering(29) 00:17:03.180 fused_ordering(30) 00:17:03.180 fused_ordering(31) 00:17:03.180 fused_ordering(32) 00:17:03.180 fused_ordering(33) 00:17:03.180 fused_ordering(34) 00:17:03.180 fused_ordering(35) 00:17:03.180 fused_ordering(36) 00:17:03.180 fused_ordering(37) 00:17:03.180 fused_ordering(38) 00:17:03.180 fused_ordering(39) 00:17:03.180 fused_ordering(40) 00:17:03.180 fused_ordering(41) 00:17:03.180 fused_ordering(42) 00:17:03.181 fused_ordering(43) 00:17:03.181 fused_ordering(44) 00:17:03.181 fused_ordering(45) 00:17:03.181 fused_ordering(46) 00:17:03.181 fused_ordering(47) 00:17:03.181 
fused_ordering(48) 00:17:03.181 fused_ordering(49) 00:17:03.181 fused_ordering(50) 00:17:03.181 fused_ordering(51) 00:17:03.181 fused_ordering(52) 00:17:03.181 fused_ordering(53) 00:17:03.181 fused_ordering(54) 00:17:03.181 fused_ordering(55) 00:17:03.181 fused_ordering(56) 00:17:03.181 fused_ordering(57) 00:17:03.181 fused_ordering(58) 00:17:03.181 fused_ordering(59) 00:17:03.181 fused_ordering(60) 00:17:03.181 fused_ordering(61) 00:17:03.181 fused_ordering(62) 00:17:03.181 fused_ordering(63) 00:17:03.181 fused_ordering(64) 00:17:03.181 fused_ordering(65) 00:17:03.181 fused_ordering(66) 00:17:03.181 fused_ordering(67) 00:17:03.181 fused_ordering(68) 00:17:03.181 fused_ordering(69) 00:17:03.181 fused_ordering(70) 00:17:03.181 fused_ordering(71) 00:17:03.181 fused_ordering(72) 00:17:03.181 fused_ordering(73) 00:17:03.181 fused_ordering(74) 00:17:03.181 fused_ordering(75) 00:17:03.181 fused_ordering(76) 00:17:03.181 fused_ordering(77) 00:17:03.181 fused_ordering(78) 00:17:03.181 fused_ordering(79) 00:17:03.181 fused_ordering(80) 00:17:03.181 fused_ordering(81) 00:17:03.181 fused_ordering(82) 00:17:03.181 fused_ordering(83) 00:17:03.181 fused_ordering(84) 00:17:03.181 fused_ordering(85) 00:17:03.181 fused_ordering(86) 00:17:03.181 fused_ordering(87) 00:17:03.181 fused_ordering(88) 00:17:03.181 fused_ordering(89) 00:17:03.181 fused_ordering(90) 00:17:03.181 fused_ordering(91) 00:17:03.181 fused_ordering(92) 00:17:03.181 fused_ordering(93) 00:17:03.181 fused_ordering(94) 00:17:03.181 fused_ordering(95) 00:17:03.181 fused_ordering(96) 00:17:03.181 fused_ordering(97) 00:17:03.181 fused_ordering(98) 00:17:03.181 fused_ordering(99) 00:17:03.181 fused_ordering(100) 00:17:03.181 fused_ordering(101) 00:17:03.181 fused_ordering(102) 00:17:03.181 fused_ordering(103) 00:17:03.181 fused_ordering(104) 00:17:03.181 fused_ordering(105) 00:17:03.181 fused_ordering(106) 00:17:03.181 fused_ordering(107) 00:17:03.181 fused_ordering(108) 00:17:03.181 fused_ordering(109) 00:17:03.181 
fused_ordering(110) 00:17:03.181 fused_ordering(111) 00:17:03.181 fused_ordering(112) 00:17:03.181 fused_ordering(113) 00:17:03.181 fused_ordering(114) 00:17:03.181 fused_ordering(115) 00:17:03.181 fused_ordering(116) 00:17:03.181 fused_ordering(117) 00:17:03.181 fused_ordering(118) 00:17:03.181 fused_ordering(119) 00:17:03.181 fused_ordering(120) 00:17:03.181 fused_ordering(121) 00:17:03.181 fused_ordering(122) 00:17:03.181 fused_ordering(123) 00:17:03.181 fused_ordering(124) 00:17:03.181 fused_ordering(125) 00:17:03.181 fused_ordering(126) 00:17:03.181 fused_ordering(127) 00:17:03.181 fused_ordering(128) 00:17:03.181 fused_ordering(129) 00:17:03.181 fused_ordering(130) 00:17:03.181 fused_ordering(131) 00:17:03.181 fused_ordering(132) 00:17:03.181 fused_ordering(133) 00:17:03.181 fused_ordering(134) 00:17:03.181 fused_ordering(135) 00:17:03.181 fused_ordering(136) 00:17:03.181 fused_ordering(137) 00:17:03.181 fused_ordering(138) 00:17:03.181 fused_ordering(139) 00:17:03.181 fused_ordering(140) 00:17:03.181 fused_ordering(141) 00:17:03.181 fused_ordering(142) 00:17:03.181 fused_ordering(143) 00:17:03.181 fused_ordering(144) 00:17:03.181 fused_ordering(145) 00:17:03.181 fused_ordering(146) 00:17:03.181 fused_ordering(147) 00:17:03.181 fused_ordering(148) 00:17:03.181 fused_ordering(149) 00:17:03.181 fused_ordering(150) 00:17:03.181 fused_ordering(151) 00:17:03.181 fused_ordering(152) 00:17:03.181 fused_ordering(153) 00:17:03.181 fused_ordering(154) 00:17:03.181 fused_ordering(155) 00:17:03.181 fused_ordering(156) 00:17:03.181 fused_ordering(157) 00:17:03.181 fused_ordering(158) 00:17:03.181 fused_ordering(159) 00:17:03.181 fused_ordering(160) 00:17:03.181 fused_ordering(161) 00:17:03.181 fused_ordering(162) 00:17:03.181 fused_ordering(163) 00:17:03.181 fused_ordering(164) 00:17:03.181 fused_ordering(165) 00:17:03.181 fused_ordering(166) 00:17:03.181 fused_ordering(167) 00:17:03.181 fused_ordering(168) 00:17:03.181 fused_ordering(169) 00:17:03.181 fused_ordering(170) 
00:17:03.181 fused_ordering(171) 00:17:03.181 fused_ordering(172) 00:17:03.181 fused_ordering(173) 00:17:03.181 fused_ordering(174) 00:17:03.181 fused_ordering(175) 00:17:03.181 fused_ordering(176) 00:17:03.181 fused_ordering(177) 00:17:03.181 fused_ordering(178) 00:17:03.181 fused_ordering(179) 00:17:03.181 fused_ordering(180) 00:17:03.181 fused_ordering(181) 00:17:03.181 fused_ordering(182) 00:17:03.181 fused_ordering(183) 00:17:03.181 fused_ordering(184) 00:17:03.181 fused_ordering(185) 00:17:03.181 fused_ordering(186) 00:17:03.181 fused_ordering(187) 00:17:03.181 fused_ordering(188) 00:17:03.181 fused_ordering(189) 00:17:03.181 fused_ordering(190) 00:17:03.181 fused_ordering(191) 00:17:03.181 fused_ordering(192) 00:17:03.181 fused_ordering(193) 00:17:03.181 fused_ordering(194) 00:17:03.181 fused_ordering(195) 00:17:03.181 fused_ordering(196) 00:17:03.181 fused_ordering(197) 00:17:03.181 fused_ordering(198) 00:17:03.181 fused_ordering(199) 00:17:03.181 fused_ordering(200) 00:17:03.181 fused_ordering(201) 00:17:03.181 fused_ordering(202) 00:17:03.181 fused_ordering(203) 00:17:03.181 fused_ordering(204) 00:17:03.181 fused_ordering(205) 00:17:03.748 fused_ordering(206) 00:17:03.748 fused_ordering(207) 00:17:03.748 fused_ordering(208) 00:17:03.748 fused_ordering(209) 00:17:03.748 fused_ordering(210) 00:17:03.748 fused_ordering(211) 00:17:03.748 fused_ordering(212) 00:17:03.748 fused_ordering(213) 00:17:03.748 fused_ordering(214) 00:17:03.748 fused_ordering(215) 00:17:03.748 fused_ordering(216) 00:17:03.748 fused_ordering(217) 00:17:03.748 fused_ordering(218) 00:17:03.748 fused_ordering(219) 00:17:03.748 fused_ordering(220) 00:17:03.748 fused_ordering(221) 00:17:03.748 fused_ordering(222) 00:17:03.748 fused_ordering(223) 00:17:03.748 fused_ordering(224) 00:17:03.748 fused_ordering(225) 00:17:03.748 fused_ordering(226) 00:17:03.748 fused_ordering(227) 00:17:03.748 fused_ordering(228) 00:17:03.748 fused_ordering(229) 00:17:03.748 fused_ordering(230) 00:17:03.748 
fused_ordering(231) 00:17:03.748 fused_ordering(232) 00:17:03.748 fused_ordering(233) 00:17:03.748 fused_ordering(234) 00:17:03.748 fused_ordering(235) 00:17:03.748 fused_ordering(236) 00:17:03.748 fused_ordering(237) 00:17:03.748 fused_ordering(238) 00:17:03.748 fused_ordering(239) 00:17:03.748 fused_ordering(240) 00:17:03.748 fused_ordering(241) 00:17:03.748 fused_ordering(242) 00:17:03.748 fused_ordering(243) 00:17:03.748 fused_ordering(244) 00:17:03.748 fused_ordering(245) 00:17:03.748 fused_ordering(246) 00:17:03.748 fused_ordering(247) 00:17:03.748 fused_ordering(248) 00:17:03.748 fused_ordering(249) 00:17:03.748 fused_ordering(250) 00:17:03.748 fused_ordering(251) 00:17:03.748 fused_ordering(252) 00:17:03.748 fused_ordering(253) 00:17:03.748 fused_ordering(254) 00:17:03.748 fused_ordering(255) 00:17:03.748 fused_ordering(256) 00:17:03.748 fused_ordering(257) 00:17:03.748 fused_ordering(258) 00:17:03.748 fused_ordering(259) 00:17:03.748 fused_ordering(260) 00:17:03.748 fused_ordering(261) 00:17:03.748 fused_ordering(262) 00:17:03.748 fused_ordering(263) 00:17:03.748 fused_ordering(264) 00:17:03.748 fused_ordering(265) 00:17:03.748 fused_ordering(266) 00:17:03.748 fused_ordering(267) 00:17:03.748 fused_ordering(268) 00:17:03.748 fused_ordering(269) 00:17:03.748 fused_ordering(270) 00:17:03.748 fused_ordering(271) 00:17:03.748 fused_ordering(272) 00:17:03.749 fused_ordering(273) 00:17:03.749 fused_ordering(274) 00:17:03.749 fused_ordering(275) 00:17:03.749 fused_ordering(276) 00:17:03.749 fused_ordering(277) 00:17:03.749 fused_ordering(278) 00:17:03.749 fused_ordering(279) 00:17:03.749 fused_ordering(280) 00:17:03.749 fused_ordering(281) 00:17:03.749 fused_ordering(282) 00:17:03.749 fused_ordering(283) 00:17:03.749 fused_ordering(284) 00:17:03.749 fused_ordering(285) 00:17:03.749 fused_ordering(286) 00:17:03.749 fused_ordering(287) 00:17:03.749 fused_ordering(288) 00:17:03.749 fused_ordering(289) 00:17:03.749 fused_ordering(290) 00:17:03.749 fused_ordering(291) 
00:17:03.749 fused_ordering(292) 00:17:03.749 fused_ordering(293) 00:17:03.749 fused_ordering(294) 00:17:03.749 fused_ordering(295) 00:17:03.749 fused_ordering(296) 00:17:03.749 fused_ordering(297) 00:17:03.749 fused_ordering(298) 00:17:03.749 fused_ordering(299) 00:17:03.749 fused_ordering(300) 00:17:03.749 fused_ordering(301) 00:17:03.749 fused_ordering(302) 00:17:03.749 fused_ordering(303) 00:17:03.749 fused_ordering(304) 00:17:03.749 fused_ordering(305) 00:17:03.749 fused_ordering(306) 00:17:03.749 fused_ordering(307) 00:17:03.749 fused_ordering(308) 00:17:03.749 fused_ordering(309) 00:17:03.749 fused_ordering(310) 00:17:03.749 fused_ordering(311) 00:17:03.749 fused_ordering(312) 00:17:03.749 fused_ordering(313) 00:17:03.749 fused_ordering(314) 00:17:03.749 fused_ordering(315) 00:17:03.749 fused_ordering(316) 00:17:03.749 fused_ordering(317) 00:17:03.749 fused_ordering(318) 00:17:03.749 fused_ordering(319) 00:17:03.749 fused_ordering(320) 00:17:03.749 fused_ordering(321) 00:17:03.749 fused_ordering(322) 00:17:03.749 fused_ordering(323) 00:17:03.749 fused_ordering(324) 00:17:03.749 fused_ordering(325) 00:17:03.749 fused_ordering(326) 00:17:03.749 fused_ordering(327) 00:17:03.749 fused_ordering(328) 00:17:03.749 fused_ordering(329) 00:17:03.749 fused_ordering(330) 00:17:03.749 fused_ordering(331) 00:17:03.749 fused_ordering(332) 00:17:03.749 fused_ordering(333) 00:17:03.749 fused_ordering(334) 00:17:03.749 fused_ordering(335) 00:17:03.749 fused_ordering(336) 00:17:03.749 fused_ordering(337) 00:17:03.749 fused_ordering(338) 00:17:03.749 fused_ordering(339) 00:17:03.749 fused_ordering(340) 00:17:03.749 fused_ordering(341) 00:17:03.749 fused_ordering(342) 00:17:03.749 fused_ordering(343) 00:17:03.749 fused_ordering(344) 00:17:03.749 fused_ordering(345) 00:17:03.749 fused_ordering(346) 00:17:03.749 fused_ordering(347) 00:17:03.749 fused_ordering(348) 00:17:03.749 fused_ordering(349) 00:17:03.749 fused_ordering(350) 00:17:03.749 fused_ordering(351) 00:17:03.749 
fused_ordering(352) 00:17:03.749 [repetitive per-iteration counter output elided: fused_ordering(353) through fused_ordering(1016), logged between 00:17:03.749 and 00:17:05.821] 00:17:05.821 
fused_ordering(1017) 00:17:05.821 fused_ordering(1018) 00:17:05.821 fused_ordering(1019) 00:17:05.821 fused_ordering(1020) 00:17:05.821 fused_ordering(1021) 00:17:05.821 fused_ordering(1022) 00:17:05.821 fused_ordering(1023) 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.821 rmmod nvme_tcp 00:17:05.821 rmmod nvme_fabrics 00:17:05.821 rmmod nvme_keyring 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1612419 ']' 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1612419 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1612419 ']' 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 1612419 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1612419 00:17:05.821 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:05.822 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:05.822 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1612419' 00:17:05.822 killing process with pid 1612419 00:17:05.822 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1612419 00:17:05.822 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1612419 00:17:06.081 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.081 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.081 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.081 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.081 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.081 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.081 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:17:06.081 05:36:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.980 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:07.980 00:17:07.980 real 0m7.922s 00:17:07.980 user 0m5.529s 00:17:07.980 sys 0m3.633s 00:17:07.980 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.980 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:07.980 ************************************ 00:17:07.980 END TEST nvmf_fused_ordering 00:17:07.980 ************************************ 00:17:07.980 05:37:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:07.980 05:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:07.980 05:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.981 05:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.981 ************************************ 00:17:07.981 START TEST nvmf_ns_masking 00:17:07.981 ************************************ 00:17:07.981 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:08.239 * Looking for test storage... 
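The teardown sequence above follows the autotest `killprocess` pattern: probe the PID with `kill -0` (which sends no signal, only checks liveness), read its command name with `ps --no-headers -o comm=`, then kill and wait for it. A minimal sketch of that pattern, using a throwaway `sleep` process in place of the SPDK target process (the PID and process name here are stand-ins, not the ones from this log):

```shell
#!/bin/sh
# Spawn a placeholder process to stand in for the nvmf target.
sleep 30 &
pid=$!

# kill -0 delivers no signal; it only tests that the PID exists and is signalable.
if kill -0 "$pid" 2>/dev/null; then
    # Mirror the log's 'ps --no-headers -o comm=' check on the process name.
    name=$(ps -o comm= -p "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    # Reap the child so the PID is fully gone, as the log's 'wait' step does.
    wait "$pid" 2>/dev/null
fi

# After kill + wait, the liveness probe should now fail.
if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid still alive"
else
    echo "process $pid is gone"
fi
```

The `kill -0` probe is what lets the script skip the kill cleanly when the process already exited, instead of erroring out mid-teardown.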
00:17:08.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.239 
05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=df3b0f2b-9849-412d-ada7-19a6072feb5e 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=611e3192-d1e1-4879-9989-1ee75856acbd 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4a308e0e-e363-42d1-b017-7ccf38290c22 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.239 05:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.239 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.135 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:10.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:10.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:10.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:10.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.136 05:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.136 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.394 05:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:17:10.394 00:17:10.394 --- 10.0.0.2 ping statistics --- 00:17:10.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.394 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:17:10.394 00:17:10.394 --- 10.0.0.1 ping statistics --- 00:17:10.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.394 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1614837 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1614837 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1614837 ']' 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:10.394 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:10.394 [2024-07-25 05:37:03.949748] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:17:10.394 [2024-07-25 05:37:03.949850] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.394 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.394 [2024-07-25 05:37:04.025136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.652 [2024-07-25 05:37:04.123027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.652 [2024-07-25 05:37:04.123086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.652 [2024-07-25 05:37:04.123102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.652 [2024-07-25 05:37:04.123116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.652 [2024-07-25 05:37:04.123129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:10.652 [2024-07-25 05:37:04.123161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.652 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.652 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:10.652 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.652 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:10.652 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:10.652 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.652 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:10.910 [2024-07-25 05:37:04.543368] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.910 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:10.910 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:10.910 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:11.168 Malloc1 00:17:11.168 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:11.426 Malloc2 00:17:11.426 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:11.683 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:11.941 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.198 [2024-07-25 05:37:05.832607] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.198 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:12.198 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4a308e0e-e363-42d1-b017-7ccf38290c22 -a 10.0.0.2 -s 4420 -i 4 00:17:12.456 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:12.456 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:12.456 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.456 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:12.456 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:14.985 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:14.985 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:14.985 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:17:14.985 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:14.985 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.985 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:14.985 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:14.986 [ 0]:0x1 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f1ae8bb2f96b4cbf801bd98171dc317e 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f1ae8bb2f96b4cbf801bd98171dc317e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:14.986 [ 0]:0x1 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f1ae8bb2f96b4cbf801bd98171dc317e 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f1ae8bb2f96b4cbf801bd98171dc317e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:14.986 [ 1]:0x2 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4b5a129273e4f48a42e872abe9c35a0 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4b5a129273e4f48a42e872abe9c35a0 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.986 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:15.244 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:15.502 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:15.502 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4a308e0e-e363-42d1-b017-7ccf38290c22 -a 10.0.0.2 -s 4420 -i 4 00:17:15.759 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:15.760 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:15.760 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.760 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:15.760 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:15.760 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:18.289 05:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:18.289 [ 0]:0x2 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4b5a129273e4f48a42e872abe9c35a0 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4b5a129273e4f48a42e872abe9c35a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:18.289 [ 0]:0x1 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f1ae8bb2f96b4cbf801bd98171dc317e 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f1ae8bb2f96b4cbf801bd98171dc317e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.289 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:17:18.289 [ 1]:0x2 00:17:18.290 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:18.290 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.290 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4b5a129273e4f48a42e872abe9c35a0 00:17:18.290 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4b5a129273e4f48a42e872abe9c35a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.290 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:18.548 [ 0]:0x2 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4b5a129273e4f48a42e872abe9c35a0 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ d4b5a129273e4f48a42e872abe9c35a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:18.548 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.810 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:19.070 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:19.070 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4a308e0e-e363-42d1-b017-7ccf38290c22 -a 10.0.0.2 -s 4420 -i 4 00:17:19.070 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:19.070 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:19.070 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:19.070 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:19.070 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:19.070 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:20.966 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:20.966 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:17:20.966 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.223 [ 0]:0x1 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f1ae8bb2f96b4cbf801bd98171dc317e 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f1ae8bb2f96b4cbf801bd98171dc317e != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:21.223 [ 1]:0x2 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4b5a129273e4f48a42e872abe9c35a0 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4b5a129273e4f48a42e872abe9c35a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.223 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:21.480 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:21.481 [ 0]:0x2 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:21.481 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4b5a129273e4f48a42e872abe9c35a0 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4b5a129273e4f48a42e872abe9c35a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:21.738 [2024-07-25 05:37:15.413552] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:21.738 request: 00:17:21.738 { 00:17:21.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.738 "nsid": 2, 00:17:21.738 "host": "nqn.2016-06.io.spdk:host1", 00:17:21.738 "method": "nvmf_ns_remove_host", 00:17:21.738 "req_id": 1 00:17:21.738 } 00:17:21.738 Got JSON-RPC error response 00:17:21.738 response: 00:17:21.738 { 00:17:21.738 "code": -32602, 00:17:21.738 "message": "Invalid parameters" 00:17:21.738 } 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.738 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.995 05:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.995 [ 0]:0x2 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4b5a129273e4f48a42e872abe9c35a0 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4b5a129273e4f48a42e872abe9c35a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.995 [2024-07-25 05:37:15.578744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb874e0 is same with the state(5) to be set 00:17:21.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1616337 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 
1616337 /var/tmp/host.sock 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1616337 ']' 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:21.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:21.995 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:21.995 [2024-07-25 05:37:15.633540] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:17:21.995 [2024-07-25 05:37:15.633623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616337 ] 00:17:21.995 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.251 [2024-07-25 05:37:15.697978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.251 [2024-07-25 05:37:15.788962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.508 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.508 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:22.508 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:22.765 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:23.051 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid df3b0f2b-9849-412d-ada7-19a6072feb5e 00:17:23.051 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:23.051 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DF3B0F2B9849412DADA719A6072FEB5E -i 00:17:23.308 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 611e3192-d1e1-4879-9989-1ee75856acbd 00:17:23.308 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 
00:17:23.308 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 611E3192D1E1487999891EE75856ACBD -i 00:17:23.564 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:23.821 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:24.078 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:24.078 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:24.335 nvme0n1 00:17:24.335 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:24.335 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:24.592 nvme1n2 00:17:24.850 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc 
bdev_get_bdevs 00:17:24.850 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:24.850 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:24.850 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:24.850 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:25.107 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:25.107 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:25.107 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:25.107 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:25.364 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ df3b0f2b-9849-412d-ada7-19a6072feb5e == \d\f\3\b\0\f\2\b\-\9\8\4\9\-\4\1\2\d\-\a\d\a\7\-\1\9\a\6\0\7\2\f\e\b\5\e ]] 00:17:25.364 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:25.364 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:25.364 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 611e3192-d1e1-4879-9989-1ee75856acbd == 
\6\1\1\e\3\1\9\2\-\d\1\e\1\-\4\8\7\9\-\9\9\8\9\-\1\e\e\7\5\8\5\6\a\c\b\d ]] 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1616337 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1616337 ']' 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1616337 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1616337 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1616337' 00:17:25.621 killing process with pid 1616337 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1616337 00:17:25.621 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1616337 00:17:25.878 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.135 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:26.135 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:26.135 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.135 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:17:26.135 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.135 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:17:26.135 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.135 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.136 rmmod nvme_tcp 00:17:26.393 rmmod nvme_fabrics 00:17:26.393 rmmod nvme_keyring 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1614837 ']' 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1614837 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1614837 ']' 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1614837 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1614837 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:26.393 05:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1614837' 00:17:26.393 killing process with pid 1614837 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1614837 00:17:26.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1614837 00:17:26.651 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.651 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.651 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.651 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.651 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.651 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.651 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.651 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.185 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:29.185 00:17:29.185 real 0m20.622s 00:17:29.185 user 0m26.898s 00:17:29.185 sys 0m4.033s 00:17:29.185 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.185 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:29.185 ************************************ 00:17:29.185 END 
TEST nvmf_ns_masking 00:17:29.185 ************************************ 00:17:29.185 05:37:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:29.185 05:37:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:29.185 05:37:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:29.185 05:37:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:29.186 ************************************ 00:17:29.186 START TEST nvmf_nvme_cli 00:17:29.186 ************************************ 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:29.186 * Looking for test storage... 
00:17:29.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.186 05:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:17:29.186 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:31.089 
05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.089 05:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:31.089 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:31.089 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:31.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.089 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:31.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:31.090 05:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.090 05:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:31.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:17:31.090 00:17:31.090 --- 10.0.0.2 ping statistics --- 00:17:31.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.090 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:31.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:17:31.090 00:17:31.090 --- 10.0.0.1 ping statistics --- 00:17:31.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.090 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1618823 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1618823 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1618823 ']' 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.090 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.090 [2024-07-25 05:37:24.649741] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:17:31.090 [2024-07-25 05:37:24.649837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.090 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.090 [2024-07-25 05:37:24.718543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.348 [2024-07-25 05:37:24.814132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.349 [2024-07-25 05:37:24.814187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:31.349 [2024-07-25 05:37:24.814212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.349 [2024-07-25 05:37:24.814225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.349 [2024-07-25 05:37:24.814237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.349 [2024-07-25 05:37:24.814316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.349 [2024-07-25 05:37:24.814374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.349 [2024-07-25 05:37:24.814430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.349 [2024-07-25 05:37:24.814432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.349 [2024-07-25 05:37:24.982887] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.349 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.349 Malloc0 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.349 Malloc1 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.349 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.349 05:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.607 [2024-07-25 05:37:25.068496] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:31.607 00:17:31.607 Discovery Log Number of Records 2, Generation counter 2 00:17:31.607 =====Discovery Log Entry 0====== 00:17:31.607 trtype: tcp 00:17:31.607 adrfam: ipv4 00:17:31.607 subtype: current discovery subsystem 00:17:31.607 treq: not required 00:17:31.607 portid: 0 00:17:31.607 trsvcid: 4420 00:17:31.607 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:31.607 traddr: 10.0.0.2 00:17:31.607 eflags: explicit discovery connections, duplicate discovery information 00:17:31.607 sectype: none 00:17:31.607 =====Discovery Log Entry 1====== 00:17:31.607 trtype: tcp 00:17:31.607 adrfam: ipv4 00:17:31.607 subtype: nvme subsystem 00:17:31.607 treq: not required 00:17:31.607 portid: 0 00:17:31.607 trsvcid: 4420 00:17:31.607 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:31.607 traddr: 10.0.0.2 00:17:31.607 eflags: none 00:17:31.607 sectype: none 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:31.607 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:32.173 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:32.173 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:32.173 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:32.173 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:32.173 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:32.173 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:34.697 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:34.698 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:34.698 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:34.698 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:34.698 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:34.698 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:34.698 /dev/nvme0n1 ]] 00:17:34.698 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:34.698 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:34.698 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:34.698 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:34.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:34.955 rmmod nvme_tcp 00:17:34.955 rmmod nvme_fabrics 00:17:34.955 rmmod 
nvme_keyring 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1618823 ']' 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1618823 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1618823 ']' 00:17:34.955 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1618823 00:17:34.956 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:17:34.956 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.956 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1618823 00:17:34.956 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.956 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.956 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1618823' 00:17:34.956 killing process with pid 1618823 00:17:34.956 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1618823 00:17:34.956 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1618823 00:17:35.215 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.215 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.215 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.215 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.215 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.215 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.215 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.215 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.117 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.375 00:17:37.375 real 0m8.487s 00:17:37.375 user 0m16.214s 00:17:37.375 sys 0m2.254s 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.375 ************************************ 00:17:37.375 END TEST nvmf_nvme_cli 00:17:37.375 ************************************ 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.375 
************************************ 00:17:37.375 START TEST nvmf_vfio_user 00:17:37.375 ************************************ 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:37.375 * Looking for test storage... 00:17:37.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.375 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.376 05:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:37.376 05:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1619650 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1619650' 00:17:37.376 Process pid: 1619650 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1619650 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1619650 ']' 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.376 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:37.376 [2024-07-25 05:37:30.993800] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:17:37.376 [2024-07-25 05:37:30.993896] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.376 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.376 [2024-07-25 05:37:31.052846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.635 [2024-07-25 05:37:31.139182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.635 [2024-07-25 05:37:31.139246] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.635 [2024-07-25 05:37:31.139276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.635 [2024-07-25 05:37:31.139287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.635 [2024-07-25 05:37:31.139305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:37.635 [2024-07-25 05:37:31.139374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.635 [2024-07-25 05:37:31.139432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.635 [2024-07-25 05:37:31.139499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.635 [2024-07-25 05:37:31.139502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.635 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.635 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:37.635 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:38.568 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:39.133 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:39.133 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:39.133 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:39.133 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:39.133 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:39.133 Malloc1 00:17:39.133 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:39.391 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:39.648 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:39.905 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:39.905 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:39.905 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:40.167 Malloc2 00:17:40.167 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:40.460 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:40.718 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:40.976 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:40.976 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:40.976 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:40.976 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:40.976 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:40.976 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:40.976 [2024-07-25 05:37:34.583216] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:17:40.976 [2024-07-25 05:37:34.583279] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1620160 ] 00:17:40.976 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.976 [2024-07-25 05:37:34.616376] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:40.976 [2024-07-25 05:37:34.624711] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:40.976 [2024-07-25 05:37:34.624738] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f59a2ba1000 00:17:40.976 [2024-07-25 05:37:34.625703] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:40.976 [2024-07-25 05:37:34.626697] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:40.976 [2024-07-25 
05:37:34.627704] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:40.976 [2024-07-25 05:37:34.628710] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:40.976 [2024-07-25 05:37:34.629712] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:40.976 [2024-07-25 05:37:34.630715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:40.976 [2024-07-25 05:37:34.631720] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:40.976 [2024-07-25 05:37:34.632725] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:40.976 [2024-07-25 05:37:34.633727] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:40.976 [2024-07-25 05:37:34.633747] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f59a1955000 00:17:40.977 [2024-07-25 05:37:34.634870] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:40.977 [2024-07-25 05:37:34.650888] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:40.977 [2024-07-25 05:37:34.650925] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:40.977 [2024-07-25 05:37:34.655877] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:17:40.977 [2024-07-25 05:37:34.655936] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:40.977 [2024-07-25 05:37:34.656042] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:40.977 [2024-07-25 05:37:34.656071] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:40.977 [2024-07-25 05:37:34.656091] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:40.977 [2024-07-25 05:37:34.656874] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:40.977 [2024-07-25 05:37:34.656904] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:40.977 [2024-07-25 05:37:34.656918] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:40.977 [2024-07-25 05:37:34.657876] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:40.977 [2024-07-25 05:37:34.657894] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:40.977 [2024-07-25 05:37:34.657907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:40.977 [2024-07-25 05:37:34.658879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:40.977 [2024-07-25 05:37:34.658896] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:40.977 [2024-07-25 05:37:34.659885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:40.977 [2024-07-25 05:37:34.659904] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:40.977 [2024-07-25 05:37:34.659913] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:40.977 [2024-07-25 05:37:34.659924] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:40.977 [2024-07-25 05:37:34.660034] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:40.977 [2024-07-25 05:37:34.660042] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:40.977 [2024-07-25 05:37:34.660051] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:40.977 [2024-07-25 05:37:34.660891] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:40.977 [2024-07-25 05:37:34.661894] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:40.977 [2024-07-25 05:37:34.662899] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:40.977 
[2024-07-25 05:37:34.663894] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:40.977 [2024-07-25 05:37:34.664001] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:40.977 [2024-07-25 05:37:34.664914] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:40.977 [2024-07-25 05:37:34.664931] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:40.977 [2024-07-25 05:37:34.664940] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.664964] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:40.977 [2024-07-25 05:37:34.664977] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665008] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:40.977 [2024-07-25 05:37:34.665018] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:40.977 [2024-07-25 05:37:34.665024] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.977 [2024-07-25 05:37:34.665044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:40.977 [2024-07-25 05:37:34.665092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:40.977 [2024-07-25 05:37:34.665110] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:40.977 [2024-07-25 05:37:34.665118] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:40.977 [2024-07-25 05:37:34.665125] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:40.977 [2024-07-25 05:37:34.665133] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:40.977 [2024-07-25 05:37:34.665141] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:40.977 [2024-07-25 05:37:34.665149] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:40.977 [2024-07-25 05:37:34.665156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:40.977 [2024-07-25 05:37:34.665209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:40.977 [2024-07-25 05:37:34.665252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.977 [2024-07-25 05:37:34.665268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.977 [2024-07-25 05:37:34.665281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.977 [2024-07-25 05:37:34.665293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:40.977 [2024-07-25 05:37:34.665302] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665321] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:40.977 [2024-07-25 05:37:34.665350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:40.977 [2024-07-25 05:37:34.665361] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:40.977 [2024-07-25 05:37:34.665370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665385] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665400] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665415] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:40.977 [2024-07-25 05:37:34.665427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:40.977 [2024-07-25 05:37:34.665495] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665512] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665534] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:40.977 [2024-07-25 05:37:34.665557] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:40.977 [2024-07-25 05:37:34.665563] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.977 [2024-07-25 05:37:34.665573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:40.977 [2024-07-25 05:37:34.665587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:40.977 [2024-07-25 05:37:34.665606] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:40.977 [2024-07-25 05:37:34.665625] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:40.977 [2024-07-25 05:37:34.665640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:40.977 [2024-07-25 
05:37:34.665652] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:40.977 [2024-07-25 05:37:34.665660] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:40.977 [2024-07-25 05:37:34.665666] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.977 [2024-07-25 05:37:34.665675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:40.977 [2024-07-25 05:37:34.665706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:40.977 [2024-07-25 05:37:34.665822] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:40.978 [2024-07-25 05:37:34.665840] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:40.978 [2024-07-25 05:37:34.665852] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:40.978 [2024-07-25 05:37:34.665861] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:40.978 [2024-07-25 05:37:34.665866] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.978 [2024-07-25 05:37:34.665876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:40.978 [2024-07-25 05:37:34.665887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:40.978 [2024-07-25 05:37:34.665900] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:40.978 [2024-07-25 05:37:34.665915] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:40.978 [2024-07-25 05:37:34.665928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:40.978 [2024-07-25 05:37:34.665941] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:40.978 [2024-07-25 05:37:34.665950] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:40.978 [2024-07-25 05:37:34.665959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:40.978 [2024-07-25 05:37:34.665967] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:40.978 [2024-07-25 05:37:34.665974] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:40.978 [2024-07-25 05:37:34.665982] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:40.978 [2024-07-25 05:37:34.666009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:40.978 [2024-07-25 05:37:34.666028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:17:40.978 [2024-07-25 05:37:34.666046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:40.978 [2024-07-25 05:37:34.666059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:40.978 [2024-07-25 05:37:34.666075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:40.978 [2024-07-25 05:37:34.666086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:40.978 [2024-07-25 05:37:34.666102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:40.978 [2024-07-25 05:37:34.666114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:40.978 [2024-07-25 05:37:34.666136] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:40.978 [2024-07-25 05:37:34.666146] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:40.978 [2024-07-25 05:37:34.666152] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:40.978 [2024-07-25 05:37:34.666158] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:40.978 [2024-07-25 05:37:34.666164] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:40.978 [2024-07-25 05:37:34.666173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:40.978 [2024-07-25 05:37:34.666184] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:17:40.978 [2024-07-25 05:37:34.666192] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:40.978 [2024-07-25 05:37:34.666198] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.978 [2024-07-25 05:37:34.666207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:40.978 [2024-07-25 05:37:34.666218] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:40.978 [2024-07-25 05:37:34.666253] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:40.978 [2024-07-25 05:37:34.666261] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.978 [2024-07-25 05:37:34.666271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:40.978 [2024-07-25 05:37:34.666285] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:40.978 [2024-07-25 05:37:34.666294] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:40.978 [2024-07-25 05:37:34.666300] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:40.978 [2024-07-25 05:37:34.666309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:40.978 [2024-07-25 05:37:34.666321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:40.978 [2024-07-25 05:37:34.666341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:40.978 [2024-07-25 05:37:34.666362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:40.978 [2024-07-25 05:37:34.666374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:40.978 ===================================================== 00:17:40.978 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:40.978 ===================================================== 00:17:40.978 Controller Capabilities/Features 00:17:40.978 ================================ 00:17:40.978 Vendor ID: 4e58 00:17:40.978 Subsystem Vendor ID: 4e58 00:17:40.978 Serial Number: SPDK1 00:17:40.978 Model Number: SPDK bdev Controller 00:17:40.978 Firmware Version: 24.09 00:17:40.978 Recommended Arb Burst: 6 00:17:40.978 IEEE OUI Identifier: 8d 6b 50 00:17:40.978 Multi-path I/O 00:17:40.978 May have multiple subsystem ports: Yes 00:17:40.978 May have multiple controllers: Yes 00:17:40.978 Associated with SR-IOV VF: No 00:17:40.978 Max Data Transfer Size: 131072 00:17:40.978 Max Number of Namespaces: 32 00:17:40.978 Max Number of I/O Queues: 127 00:17:40.978 NVMe Specification Version (VS): 1.3 00:17:40.978 NVMe Specification Version (Identify): 1.3 00:17:40.978 Maximum Queue Entries: 256 00:17:40.978 Contiguous Queues Required: Yes 00:17:40.978 Arbitration Mechanisms Supported 00:17:40.978 Weighted Round Robin: Not Supported 00:17:40.978 Vendor Specific: Not Supported 00:17:40.978 Reset Timeout: 15000 ms 00:17:40.978 Doorbell Stride: 4 bytes 00:17:40.978 NVM Subsystem Reset: Not Supported 00:17:40.978 Command Sets Supported 00:17:40.978 NVM Command Set: Supported 00:17:40.978 Boot Partition: Not Supported 00:17:40.978 Memory Page Size Minimum: 4096 bytes 00:17:40.978 Memory Page Size Maximum: 4096 bytes 00:17:40.978 Persistent Memory Region: Not 
Supported 00:17:40.978 Optional Asynchronous Events Supported 00:17:40.978 Namespace Attribute Notices: Supported 00:17:40.978 Firmware Activation Notices: Not Supported 00:17:40.978 ANA Change Notices: Not Supported 00:17:40.978 PLE Aggregate Log Change Notices: Not Supported 00:17:40.978 LBA Status Info Alert Notices: Not Supported 00:17:40.978 EGE Aggregate Log Change Notices: Not Supported 00:17:40.978 Normal NVM Subsystem Shutdown event: Not Supported 00:17:40.978 Zone Descriptor Change Notices: Not Supported 00:17:40.978 Discovery Log Change Notices: Not Supported 00:17:40.978 Controller Attributes 00:17:40.978 128-bit Host Identifier: Supported 00:17:40.978 Non-Operational Permissive Mode: Not Supported 00:17:40.978 NVM Sets: Not Supported 00:17:40.978 Read Recovery Levels: Not Supported 00:17:40.978 Endurance Groups: Not Supported 00:17:40.978 Predictable Latency Mode: Not Supported 00:17:40.978 Traffic Based Keep ALive: Not Supported 00:17:40.978 Namespace Granularity: Not Supported 00:17:40.978 SQ Associations: Not Supported 00:17:40.978 UUID List: Not Supported 00:17:40.978 Multi-Domain Subsystem: Not Supported 00:17:40.978 Fixed Capacity Management: Not Supported 00:17:40.978 Variable Capacity Management: Not Supported 00:17:40.978 Delete Endurance Group: Not Supported 00:17:40.978 Delete NVM Set: Not Supported 00:17:40.978 Extended LBA Formats Supported: Not Supported 00:17:40.978 Flexible Data Placement Supported: Not Supported 00:17:40.978 00:17:40.978 Controller Memory Buffer Support 00:17:40.978 ================================ 00:17:40.978 Supported: No 00:17:40.978 00:17:40.978 Persistent Memory Region Support 00:17:40.978 ================================ 00:17:40.978 Supported: No 00:17:40.978 00:17:40.978 Admin Command Set Attributes 00:17:40.978 ============================ 00:17:40.978 Security Send/Receive: Not Supported 00:17:40.978 Format NVM: Not Supported 00:17:40.978 Firmware Activate/Download: Not Supported 00:17:40.978 Namespace 
Management: Not Supported 00:17:40.978 Device Self-Test: Not Supported 00:17:40.978 Directives: Not Supported 00:17:40.978 NVMe-MI: Not Supported 00:17:40.979 Virtualization Management: Not Supported 00:17:40.979 Doorbell Buffer Config: Not Supported 00:17:40.979 Get LBA Status Capability: Not Supported 00:17:40.979 Command & Feature Lockdown Capability: Not Supported 00:17:40.979 Abort Command Limit: 4 00:17:40.979 Async Event Request Limit: 4 00:17:40.979 Number of Firmware Slots: N/A 00:17:40.979 Firmware Slot 1 Read-Only: N/A 00:17:40.979 Firmware Activation Without Reset: N/A 00:17:40.979 Multiple Update Detection Support: N/A 00:17:40.979 Firmware Update Granularity: No Information Provided 00:17:40.979 Per-Namespace SMART Log: No 00:17:40.979 Asymmetric Namespace Access Log Page: Not Supported 00:17:40.979 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:40.979 Command Effects Log Page: Supported 00:17:40.979 Get Log Page Extended Data: Supported 00:17:40.979 Telemetry Log Pages: Not Supported 00:17:40.979 Persistent Event Log Pages: Not Supported 00:17:40.979 Supported Log Pages Log Page: May Support 00:17:40.979 Commands Supported & Effects Log Page: Not Supported 00:17:40.979 Feature Identifiers & Effects Log Page:May Support 00:17:40.979 NVMe-MI Commands & Effects Log Page: May Support 00:17:40.979 Data Area 4 for Telemetry Log: Not Supported 00:17:40.979 Error Log Page Entries Supported: 128 00:17:40.979 Keep Alive: Supported 00:17:40.979 Keep Alive Granularity: 10000 ms 00:17:40.979 00:17:40.979 NVM Command Set Attributes 00:17:40.979 ========================== 00:17:40.979 Submission Queue Entry Size 00:17:40.979 Max: 64 00:17:40.979 Min: 64 00:17:40.979 Completion Queue Entry Size 00:17:40.979 Max: 16 00:17:40.979 Min: 16 00:17:40.979 Number of Namespaces: 32 00:17:40.979 Compare Command: Supported 00:17:40.979 Write Uncorrectable Command: Not Supported 00:17:40.979 Dataset Management Command: Supported 00:17:40.979 Write Zeroes Command: Supported 
00:17:40.979 Set Features Save Field: Not Supported 00:17:40.979 Reservations: Not Supported 00:17:40.979 Timestamp: Not Supported 00:17:40.979 Copy: Supported 00:17:40.979 Volatile Write Cache: Present 00:17:40.979 Atomic Write Unit (Normal): 1 00:17:40.979 Atomic Write Unit (PFail): 1 00:17:40.979 Atomic Compare & Write Unit: 1 00:17:40.979 Fused Compare & Write: Supported 00:17:40.979 Scatter-Gather List 00:17:40.979 SGL Command Set: Supported (Dword aligned) 00:17:40.979 SGL Keyed: Not Supported 00:17:40.979 SGL Bit Bucket Descriptor: Not Supported 00:17:40.979 SGL Metadata Pointer: Not Supported 00:17:40.979 Oversized SGL: Not Supported 00:17:40.979 SGL Metadata Address: Not Supported 00:17:40.979 SGL Offset: Not Supported 00:17:40.979 Transport SGL Data Block: Not Supported 00:17:40.979 Replay Protected Memory Block: Not Supported 00:17:40.979 00:17:40.979 Firmware Slot Information 00:17:40.979 ========================= 00:17:40.979 Active slot: 1 00:17:40.979 Slot 1 Firmware Revision: 24.09 00:17:40.979 00:17:40.979 00:17:40.979 Commands Supported and Effects 00:17:40.979 ============================== 00:17:40.979 Admin Commands 00:17:40.979 -------------- 00:17:40.979 Get Log Page (02h): Supported 00:17:40.979 Identify (06h): Supported 00:17:40.979 Abort (08h): Supported 00:17:40.979 Set Features (09h): Supported 00:17:40.979 Get Features (0Ah): Supported 00:17:40.979 Asynchronous Event Request (0Ch): Supported 00:17:40.979 Keep Alive (18h): Supported 00:17:40.979 I/O Commands 00:17:40.979 ------------ 00:17:40.979 Flush (00h): Supported LBA-Change 00:17:40.979 Write (01h): Supported LBA-Change 00:17:40.979 Read (02h): Supported 00:17:40.979 Compare (05h): Supported 00:17:40.979 Write Zeroes (08h): Supported LBA-Change 00:17:40.979 Dataset Management (09h): Supported LBA-Change 00:17:40.979 Copy (19h): Supported LBA-Change 00:17:40.979 00:17:40.979 Error Log 00:17:40.979 ========= 00:17:40.979 00:17:40.979 Arbitration 00:17:40.979 =========== 00:17:40.979 
Arbitration Burst: 1 00:17:40.979 00:17:40.979 Power Management 00:17:40.979 ================ 00:17:40.979 Number of Power States: 1 00:17:40.979 Current Power State: Power State #0 00:17:40.979 Power State #0: 00:17:40.979 Max Power: 0.00 W 00:17:40.979 Non-Operational State: Operational 00:17:40.979 Entry Latency: Not Reported 00:17:40.979 Exit Latency: Not Reported 00:17:40.979 Relative Read Throughput: 0 00:17:40.979 Relative Read Latency: 0 00:17:40.979 Relative Write Throughput: 0 00:17:40.979 Relative Write Latency: 0 00:17:40.979 Idle Power: Not Reported 00:17:40.979 Active Power: Not Reported 00:17:40.979 Non-Operational Permissive Mode: Not Supported 00:17:40.979 00:17:40.979 Health Information 00:17:40.979 ================== 00:17:40.979 Critical Warnings: 00:17:40.979 Available Spare Space: OK 00:17:40.979 Temperature: OK 00:17:40.979 Device Reliability: OK 00:17:40.979 Read Only: No 00:17:40.979 Volatile Memory Backup: OK 00:17:40.979 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:40.979 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:40.979 Available Spare: 0% 00:17:40.979 Available Spare Threshold: 0% [2024-07-25 05:37:34.666492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:40.979 [2024-07-25 05:37:34.666509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:40.979 [2024-07-25 05:37:34.666576] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:40.979 [2024-07-25 05:37:34.666610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.979 [2024-07-25 05:37:34.666622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.979 [2024-07-25 05:37:34.666631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.979 [2024-07-25 05:37:34.666641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:40.979 [2024-07-25 05:37:34.670253] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:40.979 [2024-07-25 05:37:34.670276] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:40.979 [2024-07-25 05:37:34.670938] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:40.979 [2024-07-25 05:37:34.671022] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:40.979 [2024-07-25 05:37:34.671036] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:40.979 [2024-07-25 05:37:34.671949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:40.979 [2024-07-25 05:37:34.671972] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:40.979 [2024-07-25 05:37:34.672027] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:40.979 [2024-07-25 05:37:34.674000] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:41.237 Life Percentage Used: 0% 00:17:41.237 Data Units Read: 0 00:17:41.237 Data Units Written: 0 00:17:41.237 Host Read Commands: 0 00:17:41.237 Host Write Commands: 
0 00:17:41.237 Controller Busy Time: 0 minutes 00:17:41.237 Power Cycles: 0 00:17:41.237 Power On Hours: 0 hours 00:17:41.237 Unsafe Shutdowns: 0 00:17:41.237 Unrecoverable Media Errors: 0 00:17:41.237 Lifetime Error Log Entries: 0 00:17:41.237 Warning Temperature Time: 0 minutes 00:17:41.237 Critical Temperature Time: 0 minutes 00:17:41.237 00:17:41.237 Number of Queues 00:17:41.237 ================ 00:17:41.237 Number of I/O Submission Queues: 127 00:17:41.237 Number of I/O Completion Queues: 127 00:17:41.237 00:17:41.237 Active Namespaces 00:17:41.237 ================= 00:17:41.237 Namespace ID:1 00:17:41.237 Error Recovery Timeout: Unlimited 00:17:41.237 Command Set Identifier: NVM (00h) 00:17:41.237 Deallocate: Supported 00:17:41.237 Deallocated/Unwritten Error: Not Supported 00:17:41.237 Deallocated Read Value: Unknown 00:17:41.237 Deallocate in Write Zeroes: Not Supported 00:17:41.237 Deallocated Guard Field: 0xFFFF 00:17:41.237 Flush: Supported 00:17:41.237 Reservation: Supported 00:17:41.237 Namespace Sharing Capabilities: Multiple Controllers 00:17:41.237 Size (in LBAs): 131072 (0GiB) 00:17:41.237 Capacity (in LBAs): 131072 (0GiB) 00:17:41.237 Utilization (in LBAs): 131072 (0GiB) 00:17:41.237 NGUID: 6EBAACFD7B724A1D9BAC3E8514D48ECB 00:17:41.237 UUID: 6ebaacfd-7b72-4a1d-9bac-3e8514d48ecb 00:17:41.237 Thin Provisioning: Not Supported 00:17:41.237 Per-NS Atomic Units: Yes 00:17:41.237 Atomic Boundary Size (Normal): 0 00:17:41.237 Atomic Boundary Size (PFail): 0 00:17:41.237 Atomic Boundary Offset: 0 00:17:41.237 Maximum Single Source Range Length: 65535 00:17:41.237 Maximum Copy Length: 65535 00:17:41.237 Maximum Source Range Count: 1 00:17:41.237 NGUID/EUI64 Never Reused: No 00:17:41.237 Namespace Write Protected: No 00:17:41.237 Number of LBA Formats: 1 00:17:41.237 Current LBA Format: LBA Format #00 00:17:41.237 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:41.237 00:17:41.237 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:41.237 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.237 [2024-07-25 05:37:34.904096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:46.493 Initializing NVMe Controllers 00:17:46.493 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:46.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:46.493 Initialization complete. Launching workers. 00:17:46.493 ======================================================== 00:17:46.493 Latency(us) 00:17:46.493 Device Information : IOPS MiB/s Average min max 00:17:46.493 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35015.90 136.78 3654.86 1190.45 7630.84 00:17:46.493 ======================================================== 00:17:46.493 Total : 35015.90 136.78 3654.86 1190.45 7630.84 00:17:46.493 00:17:46.493 [2024-07-25 05:37:39.926400] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:46.493 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:46.493 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.493 [2024-07-25 05:37:40.180681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:51.753 Initializing NVMe Controllers 00:17:51.753 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:51.753 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:51.753 Initialization complete. Launching workers. 00:17:51.753 ======================================================== 00:17:51.753 Latency(us) 00:17:51.753 Device Information : IOPS MiB/s Average min max 00:17:51.753 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8009.91 4986.09 15986.47 00:17:51.753 ======================================================== 00:17:51.753 Total : 16000.00 62.50 8009.91 4986.09 15986.47 00:17:51.753 00:17:51.753 [2024-07-25 05:37:45.217282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:51.753 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:51.753 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.753 [2024-07-25 05:37:45.426311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:57.015 [2024-07-25 05:37:50.516657] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:57.015 Initializing NVMe Controllers 00:17:57.015 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:57.015 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:57.015 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:57.015 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:57.015 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:57.015 Initialization complete. Launching workers. 00:17:57.015 Starting thread on core 2 00:17:57.015 Starting thread on core 3 00:17:57.015 Starting thread on core 1 00:17:57.015 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:57.015 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.274 [2024-07-25 05:37:50.809331] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:00.554 [2024-07-25 05:37:53.872571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:00.554 Initializing NVMe Controllers 00:18:00.554 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:00.554 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:00.554 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:00.554 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:00.554 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:00.554 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:00.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:00.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:00.554 Initialization complete. Launching workers. 
00:18:00.554 Starting thread on core 1 with urgent priority queue 00:18:00.554 Starting thread on core 2 with urgent priority queue 00:18:00.554 Starting thread on core 3 with urgent priority queue 00:18:00.554 Starting thread on core 0 with urgent priority queue 00:18:00.554 SPDK bdev Controller (SPDK1 ) core 0: 6613.00 IO/s 15.12 secs/100000 ios 00:18:00.554 SPDK bdev Controller (SPDK1 ) core 1: 6057.00 IO/s 16.51 secs/100000 ios 00:18:00.554 SPDK bdev Controller (SPDK1 ) core 2: 4831.67 IO/s 20.70 secs/100000 ios 00:18:00.554 SPDK bdev Controller (SPDK1 ) core 3: 6111.00 IO/s 16.36 secs/100000 ios 00:18:00.554 ======================================================== 00:18:00.554 00:18:00.554 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:00.554 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.555 [2024-07-25 05:37:54.172772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:00.555 Initializing NVMe Controllers 00:18:00.555 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:00.555 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:00.555 Namespace ID: 1 size: 0GB 00:18:00.555 Initialization complete. 00:18:00.555 INFO: using host memory buffer for IO 00:18:00.555 Hello world! 
00:18:00.555 [2024-07-25 05:37:54.206368] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:00.555 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:00.812 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.812 [2024-07-25 05:37:54.500719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:02.184 Initializing NVMe Controllers 00:18:02.184 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:02.184 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:02.184 Initialization complete. Launching workers. 00:18:02.184 submit (in ns) avg, min, max = 5719.9, 3520.0, 4013971.1 00:18:02.184 complete (in ns) avg, min, max = 26848.9, 2078.9, 4997064.4 00:18:02.184 00:18:02.184 Submit histogram 00:18:02.184 ================ 00:18:02.184 Range in us Cumulative Count 00:18:02.184 3.508 - 3.532: 0.2520% ( 34) 00:18:02.184 3.532 - 3.556: 1.5417% ( 174) 00:18:02.184 3.556 - 3.579: 4.2988% ( 372) 00:18:02.184 3.579 - 3.603: 10.3691% ( 819) 00:18:02.184 3.603 - 3.627: 19.6709% ( 1255) 00:18:02.184 3.627 - 3.650: 30.8998% ( 1515) 00:18:02.184 3.650 - 3.674: 38.8156% ( 1068) 00:18:02.184 3.674 - 3.698: 45.3009% ( 875) 00:18:02.184 3.698 - 3.721: 51.4601% ( 831) 00:18:02.184 3.721 - 3.745: 57.6193% ( 831) 00:18:02.184 3.745 - 3.769: 62.8891% ( 711) 00:18:02.184 3.769 - 3.793: 66.7729% ( 524) 00:18:02.184 3.793 - 3.816: 69.7747% ( 405) 00:18:02.184 3.816 - 3.840: 72.7616% ( 403) 00:18:02.184 3.840 - 3.864: 76.4453% ( 497) 00:18:02.184 3.864 - 3.887: 79.9733% ( 476) 00:18:02.184 3.887 - 3.911: 83.1752% ( 432) 00:18:02.184 3.911 - 3.935: 85.7249% ( 344) 00:18:02.184 3.935 - 3.959: 87.6297% ( 257) 00:18:02.184 3.959 - 
3.982: 89.4382% ( 244) 00:18:02.184 3.982 - 4.006: 91.1281% ( 228) 00:18:02.184 4.006 - 4.030: 92.3214% ( 161) 00:18:02.184 4.030 - 4.053: 93.2256% ( 122) 00:18:02.184 4.053 - 4.077: 94.1891% ( 130) 00:18:02.184 4.077 - 4.101: 94.8192% ( 85) 00:18:02.184 4.101 - 4.124: 95.2935% ( 64) 00:18:02.184 4.124 - 4.148: 95.6641% ( 50) 00:18:02.184 4.148 - 4.172: 95.9606% ( 40) 00:18:02.184 4.172 - 4.196: 96.1459% ( 25) 00:18:02.184 4.196 - 4.219: 96.3979% ( 34) 00:18:02.184 4.219 - 4.243: 96.5461% ( 20) 00:18:02.184 4.243 - 4.267: 96.6573% ( 15) 00:18:02.184 4.267 - 4.290: 96.8055% ( 20) 00:18:02.184 4.290 - 4.314: 96.9167% ( 15) 00:18:02.184 4.314 - 4.338: 97.0205% ( 14) 00:18:02.184 4.338 - 4.361: 97.1168% ( 13) 00:18:02.184 4.361 - 4.385: 97.1835% ( 9) 00:18:02.184 4.385 - 4.409: 97.2206% ( 5) 00:18:02.184 4.409 - 4.433: 97.2725% ( 7) 00:18:02.184 4.433 - 4.456: 97.3095% ( 5) 00:18:02.184 4.456 - 4.480: 97.3169% ( 1) 00:18:02.184 4.480 - 4.504: 97.3318% ( 2) 00:18:02.184 4.504 - 4.527: 97.3392% ( 1) 00:18:02.184 4.551 - 4.575: 97.3466% ( 1) 00:18:02.184 4.670 - 4.693: 97.3762% ( 4) 00:18:02.184 4.693 - 4.717: 97.4133% ( 5) 00:18:02.184 4.717 - 4.741: 97.4429% ( 4) 00:18:02.184 4.741 - 4.764: 97.5170% ( 10) 00:18:02.184 4.764 - 4.788: 97.5541% ( 5) 00:18:02.184 4.788 - 4.812: 97.6505% ( 13) 00:18:02.184 4.812 - 4.836: 97.6949% ( 6) 00:18:02.184 4.836 - 4.859: 97.7765% ( 11) 00:18:02.184 4.859 - 4.883: 97.8432% ( 9) 00:18:02.184 4.883 - 4.907: 97.8876% ( 6) 00:18:02.184 4.907 - 4.930: 97.9247% ( 5) 00:18:02.184 4.930 - 4.954: 97.9618% ( 5) 00:18:02.184 4.954 - 4.978: 97.9914% ( 4) 00:18:02.184 4.978 - 5.001: 98.0062% ( 2) 00:18:02.184 5.001 - 5.025: 98.0359% ( 4) 00:18:02.184 5.025 - 5.049: 98.0507% ( 2) 00:18:02.184 5.049 - 5.073: 98.0655% ( 2) 00:18:02.184 5.073 - 5.096: 98.1026% ( 5) 00:18:02.184 5.096 - 5.120: 98.1248% ( 3) 00:18:02.184 5.120 - 5.144: 98.1396% ( 2) 00:18:02.184 5.144 - 5.167: 98.1471% ( 1) 00:18:02.184 5.167 - 5.191: 98.1619% ( 2) 00:18:02.184 5.191 - 
5.215: 98.1693% ( 1) 00:18:02.184 5.215 - 5.239: 98.1841% ( 2) 00:18:02.184 5.239 - 5.262: 98.1989% ( 2) 00:18:02.184 5.262 - 5.286: 98.2063% ( 1) 00:18:02.184 5.310 - 5.333: 98.2138% ( 1) 00:18:02.184 5.381 - 5.404: 98.2212% ( 1) 00:18:02.184 5.404 - 5.428: 98.2286% ( 1) 00:18:02.184 5.476 - 5.499: 98.2360% ( 1) 00:18:02.184 5.760 - 5.784: 98.2434% ( 1) 00:18:02.184 5.784 - 5.807: 98.2508% ( 1) 00:18:02.184 5.831 - 5.855: 98.2582% ( 1) 00:18:02.185 5.879 - 5.902: 98.2656% ( 1) 00:18:02.185 5.902 - 5.926: 98.2731% ( 1) 00:18:02.185 6.116 - 6.163: 98.2805% ( 1) 00:18:02.185 6.400 - 6.447: 98.2879% ( 1) 00:18:02.185 6.684 - 6.732: 98.2953% ( 1) 00:18:02.185 6.732 - 6.779: 98.3027% ( 1) 00:18:02.185 6.827 - 6.874: 98.3101% ( 1) 00:18:02.185 6.921 - 6.969: 98.3323% ( 3) 00:18:02.185 7.016 - 7.064: 98.3472% ( 2) 00:18:02.185 7.111 - 7.159: 98.3620% ( 2) 00:18:02.185 7.159 - 7.206: 98.3694% ( 1) 00:18:02.185 7.206 - 7.253: 98.3768% ( 1) 00:18:02.185 7.301 - 7.348: 98.3916% ( 2) 00:18:02.185 7.348 - 7.396: 98.4065% ( 2) 00:18:02.185 7.443 - 7.490: 98.4139% ( 1) 00:18:02.185 7.538 - 7.585: 98.4213% ( 1) 00:18:02.185 7.585 - 7.633: 98.4287% ( 1) 00:18:02.185 7.633 - 7.680: 98.4583% ( 4) 00:18:02.185 7.680 - 7.727: 98.4806% ( 3) 00:18:02.185 7.727 - 7.775: 98.4880% ( 1) 00:18:02.185 7.775 - 7.822: 98.5028% ( 2) 00:18:02.185 7.870 - 7.917: 98.5102% ( 1) 00:18:02.185 7.917 - 7.964: 98.5325% ( 3) 00:18:02.185 8.012 - 8.059: 98.5399% ( 1) 00:18:02.185 8.059 - 8.107: 98.5621% ( 3) 00:18:02.185 8.107 - 8.154: 98.5769% ( 2) 00:18:02.185 8.154 - 8.201: 98.5843% ( 1) 00:18:02.185 8.201 - 8.249: 98.6140% ( 4) 00:18:02.185 8.249 - 8.296: 98.6214% ( 1) 00:18:02.185 8.296 - 8.344: 98.6436% ( 3) 00:18:02.185 8.344 - 8.391: 98.6511% ( 1) 00:18:02.185 8.391 - 8.439: 98.6659% ( 2) 00:18:02.185 8.439 - 8.486: 98.6733% ( 1) 00:18:02.185 8.486 - 8.533: 98.6881% ( 2) 00:18:02.185 8.533 - 8.581: 98.7029% ( 2) 00:18:02.185 8.581 - 8.628: 98.7178% ( 2) 00:18:02.185 8.676 - 8.723: 98.7252% ( 1) 
00:18:02.185 8.723 - 8.770: 98.7474% ( 3) 00:18:02.185 8.770 - 8.818: 98.7548% ( 1) 00:18:02.185 8.865 - 8.913: 98.7622% ( 1) 00:18:02.185 8.913 - 8.960: 98.7696% ( 1) 00:18:02.185 9.007 - 9.055: 98.7771% ( 1) 00:18:02.185 9.197 - 9.244: 98.7845% ( 1) 00:18:02.185 9.244 - 9.292: 98.7919% ( 1) 00:18:02.185 9.339 - 9.387: 98.8067% ( 2) 00:18:02.185 9.671 - 9.719: 98.8141% ( 1) 00:18:02.185 10.193 - 10.240: 98.8215% ( 1) 00:18:02.185 10.287 - 10.335: 98.8363% ( 2) 00:18:02.185 10.335 - 10.382: 98.8438% ( 1) 00:18:02.185 10.856 - 10.904: 98.8512% ( 1) 00:18:02.185 10.904 - 10.951: 98.8586% ( 1) 00:18:02.185 11.046 - 11.093: 98.8734% ( 2) 00:18:02.185 11.093 - 11.141: 98.8808% ( 1) 00:18:02.185 11.141 - 11.188: 98.8882% ( 1) 00:18:02.185 11.378 - 11.425: 98.8956% ( 1) 00:18:02.185 11.425 - 11.473: 98.9031% ( 1) 00:18:02.185 11.520 - 11.567: 98.9105% ( 1) 00:18:02.185 11.615 - 11.662: 98.9179% ( 1) 00:18:02.185 11.994 - 12.041: 98.9253% ( 1) 00:18:02.185 12.041 - 12.089: 98.9327% ( 1) 00:18:02.185 12.136 - 12.231: 98.9401% ( 1) 00:18:02.185 12.231 - 12.326: 98.9475% ( 1) 00:18:02.185 12.326 - 12.421: 98.9549% ( 1) 00:18:02.185 12.705 - 12.800: 98.9698% ( 2) 00:18:02.185 12.990 - 13.084: 98.9772% ( 1) 00:18:02.185 13.084 - 13.179: 98.9846% ( 1) 00:18:02.185 13.369 - 13.464: 98.9920% ( 1) 00:18:02.185 13.559 - 13.653: 98.9994% ( 1) 00:18:02.185 13.653 - 13.748: 99.0068% ( 1) 00:18:02.185 13.938 - 14.033: 99.0142% ( 1) 00:18:02.185 14.507 - 14.601: 99.0216% ( 1) 00:18:02.185 14.791 - 14.886: 99.0291% ( 1) 00:18:02.185 14.886 - 14.981: 99.0365% ( 1) 00:18:02.185 15.360 - 15.455: 99.0439% ( 1) 00:18:02.185 15.644 - 15.739: 99.0513% ( 1) 00:18:02.185 16.972 - 17.067: 99.0587% ( 1) 00:18:02.185 17.161 - 17.256: 99.0809% ( 3) 00:18:02.185 17.256 - 17.351: 99.0883% ( 1) 00:18:02.185 17.351 - 17.446: 99.1254% ( 5) 00:18:02.185 17.446 - 17.541: 99.1476% ( 3) 00:18:02.185 17.541 - 17.636: 99.1699% ( 3) 00:18:02.185 17.636 - 17.730: 99.2218% ( 7) 00:18:02.185 17.730 - 17.825: 
99.2588% ( 5) 00:18:02.185 17.825 - 17.920: 99.2959% ( 5) 00:18:02.185 17.920 - 18.015: 99.3552% ( 8) 00:18:02.185 18.015 - 18.110: 99.4071% ( 7) 00:18:02.185 18.110 - 18.204: 99.5034% ( 13) 00:18:02.185 18.204 - 18.299: 99.5405% ( 5) 00:18:02.185 18.299 - 18.394: 99.6072% ( 9) 00:18:02.185 18.394 - 18.489: 99.6739% ( 9) 00:18:02.185 18.489 - 18.584: 99.7406% ( 9) 00:18:02.185 18.584 - 18.679: 99.7702% ( 4) 00:18:02.185 18.679 - 18.773: 99.8147% ( 6) 00:18:02.185 18.773 - 18.868: 99.8369% ( 3) 00:18:02.185 18.868 - 18.963: 99.8444% ( 1) 00:18:02.185 18.963 - 19.058: 99.8518% ( 1) 00:18:02.185 19.153 - 19.247: 99.8592% ( 1) 00:18:02.185 19.247 - 19.342: 99.8666% ( 1) 00:18:02.185 19.437 - 19.532: 99.8740% ( 1) 00:18:02.185 20.006 - 20.101: 99.8814% ( 1) 00:18:02.185 20.575 - 20.670: 99.8888% ( 1) 00:18:02.185 22.850 - 22.945: 99.8962% ( 1) 00:18:02.185 23.135 - 23.230: 99.9036% ( 1) 00:18:02.185 25.790 - 25.979: 99.9111% ( 1) 00:18:02.185 26.359 - 26.548: 99.9185% ( 1) 00:18:02.185 27.496 - 27.686: 99.9259% ( 1) 00:18:02.185 28.255 - 28.444: 99.9333% ( 1) 00:18:02.185 31.099 - 31.289: 99.9407% ( 1) 00:18:02.185 33.944 - 34.133: 99.9481% ( 1) 00:18:02.185 34.323 - 34.513: 99.9555% ( 1) 00:18:02.185 3980.705 - 4004.978: 99.9926% ( 5) 00:18:02.185 4004.978 - 4029.250: 100.0000% ( 1) 00:18:02.185 00:18:02.185 Complete histogram 00:18:02.185 ================== 00:18:02.185 Range in us Cumulative Count 00:18:02.185 2.074 - 2.086: 1.4231% ( 192) 00:18:02.185 2.086 - 2.098: 20.7901% ( 2613) 00:18:02.185 2.098 - 2.110: 29.9437% ( 1235) 00:18:02.185 2.110 - 2.121: 36.6884% ( 910) 00:18:02.185 2.121 - 2.133: 55.1957% ( 2497) 00:18:02.185 2.133 - 2.145: 59.8429% ( 627) 00:18:02.185 2.145 - 2.157: 63.4450% ( 486) 00:18:02.185 2.157 - 2.169: 72.0501% ( 1161) 00:18:02.185 2.169 - 2.181: 74.2588% ( 298) 00:18:02.185 2.181 - 2.193: 78.2983% ( 545) 00:18:02.185 2.193 - 2.204: 85.9250% ( 1029) 00:18:02.185 2.204 - 2.216: 87.4222% ( 202) 00:18:02.185 2.216 - 2.228: 88.5562% ( 153) 
00:18:02.185 2.228 - 2.240: 90.2164% ( 224) 00:18:02.185 2.240 - 2.252: 92.2176% ( 270) 00:18:02.185 2.252 - 2.264: 93.2997% ( 146) 00:18:02.185 2.264 - 2.276: 94.4263% ( 152) 00:18:02.185 2.276 - 2.287: 94.7599% ( 45) 00:18:02.185 2.287 - 2.299: 94.9303% ( 23) 00:18:02.185 2.299 - 2.311: 95.1675% ( 32) 00:18:02.185 2.311 - 2.323: 95.7012% ( 72) 00:18:02.185 2.323 - 2.335: 95.9161% ( 29) 00:18:02.185 2.335 - 2.347: 95.9606% ( 6) 00:18:02.185 2.347 - 2.359: 95.9976% ( 5) 00:18:02.185 2.359 - 2.370: 96.0495% ( 7) 00:18:02.185 2.370 - 2.382: 96.1088% ( 8) 00:18:02.185 2.382 - 2.394: 96.3089% ( 27) 00:18:02.185 2.394 - 2.406: 96.5906% ( 38) 00:18:02.185 2.406 - 2.418: 96.7833% ( 26) 00:18:02.185 2.418 - 2.430: 97.0279% ( 33) 00:18:02.185 2.430 - 2.441: 97.3392% ( 42) 00:18:02.185 2.441 - 2.453: 97.5615% ( 30) 00:18:02.185 2.453 - 2.465: 97.7839% ( 30) 00:18:02.185 2.465 - 2.477: 97.9247% ( 19) 00:18:02.185 2.477 - 2.489: 98.0581% ( 18) 00:18:02.185 2.489 - 2.501: 98.1693% ( 15) 00:18:02.185 2.501 - 2.513: 98.2286% ( 8) 00:18:02.185 2.513 - 2.524: 98.2879% ( 8) 00:18:02.185 2.524 - 2.536: 98.3323% ( 6) 00:18:02.185 2.536 - 2.548: 98.3472% ( 2) 00:18:02.185 2.548 - 2.560: 98.3694% ( 3) 00:18:02.185 2.560 - 2.572: 98.4065% ( 5) 00:18:02.185 2.572 - 2.584: 98.4213% ( 2) 00:18:02.185 2.643 - 2.655: 98.4287% ( 1) 00:18:02.185 2.702 - 2.714: 98.4361% ( 1) 00:18:02.185 2.726 - 2.738: 98.4435% ( 1) 00:18:02.185 2.738 - 2.750: 98.4509% ( 1) 00:18:02.185 3.224 - 3.247: 98.4583% ( 1) 00:18:02.185 3.247 - 3.271: 98.4658% ( 1) 00:18:02.185 3.271 - 3.295: 98.4732% ( 1) 00:18:02.185 3.295 - 3.319: 98.4806% ( 1) 00:18:02.185 3.319 - 3.342: 98.4880% ( 1) 00:18:02.185 3.342 - 3.366: 98.5102% ( 3) 00:18:02.185 3.366 - 3.390: 98.5176% ( 1) 00:18:02.185 3.390 - 3.413: 98.5251% ( 1) 00:18:02.185 3.413 - 3.437: 98.5399% ( 2) 00:18:02.185 3.437 - 3.461: 98.5473% ( 1) 00:18:02.185 3.461 - 3.484: 98.5547% ( 1) 00:18:02.185 3.484 - 3.508: 98.5918% ( 5) 00:18:02.185 3.508 - 3.532: 98.5992% ( 1) 
00:18:02.185 3.556 - 3.579: 98.6066% ( 1) 00:18:02.185 3.603 - 3.627: 98.6140% ( 1) [2024-07-25 05:37:55.523935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:02.185 3.650 - 3.674: 98.6362% ( 3) 00:18:02.185 3.674 - 3.698: 98.6436% ( 1) 00:18:02.185 3.698 - 3.721: 98.6511% ( 1) 00:18:02.186 3.721 - 3.745: 98.6585% ( 1) 00:18:02.186 3.745 - 3.769: 98.6659% ( 1) 00:18:02.186 3.769 - 3.793: 98.6733% ( 1) 00:18:02.186 3.816 - 3.840: 98.6807% ( 1) 00:18:02.186 3.959 - 3.982: 98.6881% ( 1) 00:18:02.186 4.290 - 4.314: 98.6955% ( 1) 00:18:02.186 5.286 - 5.310: 98.7029% ( 1) 00:18:02.186 5.547 - 5.570: 98.7103% ( 1) 00:18:02.186 5.665 - 5.689: 98.7178% ( 1) 00:18:02.186 5.760 - 5.784: 98.7252% ( 1) 00:18:02.186 5.831 - 5.855: 98.7326% ( 1) 00:18:02.186 5.973 - 5.997: 98.7400% ( 1) 00:18:02.186 5.997 - 6.021: 98.7474% ( 1) 00:18:02.186 6.210 - 6.258: 98.7548% ( 1) 00:18:02.186 6.258 - 6.305: 98.7622% ( 1) 00:18:02.186 6.400 - 6.447: 98.7696% ( 1) 00:18:02.186 6.447 - 6.495: 98.7771% ( 1) 00:18:02.186 6.590 - 6.637: 98.7845% ( 1) 00:18:02.186 6.874 - 6.921: 98.7919% ( 1) 00:18:02.186 7.633 - 7.680: 98.8067% ( 2) 00:18:02.186 7.775 - 7.822: 98.8141% ( 1) 00:18:02.186 7.964 - 8.012: 98.8215% ( 1) 00:18:02.186 8.581 - 8.628: 98.8289% ( 1) 00:18:02.186 11.520 - 11.567: 98.8363% ( 1) 00:18:02.186 13.559 - 13.653: 98.8438% ( 1) 00:18:02.186 13.653 - 13.748: 98.8512% ( 1) 00:18:02.186 15.455 - 15.550: 98.8660% ( 2) 00:18:02.186 15.550 - 15.644: 98.8808% ( 2) 00:18:02.186 15.644 - 15.739: 98.8882% ( 1) 00:18:02.186 15.739 - 15.834: 98.8956% ( 1) 00:18:02.186 15.834 - 15.929: 98.9179% ( 3) 00:18:02.186 15.929 - 16.024: 98.9475% ( 4) 00:18:02.186 16.024 - 16.119: 98.9994% ( 7) 00:18:02.186 16.119 - 16.213: 99.0142% ( 2) 00:18:02.186 16.213 - 16.308: 99.0439% ( 4) 00:18:02.186 16.308 - 16.403: 99.0735% ( 4) 00:18:02.186 16.403 - 16.498: 99.1032% ( 4) 00:18:02.186 16.498 - 16.593: 99.1328% ( 4) 00:18:02.186
16.593 - 16.687: 99.1551% ( 3) 00:18:02.186 16.687 - 16.782: 99.2143% ( 8) 00:18:02.186 16.782 - 16.877: 99.2440% ( 4) 00:18:02.186 16.877 - 16.972: 99.2662% ( 3) 00:18:02.186 16.972 - 17.067: 99.2885% ( 3) 00:18:02.186 17.067 - 17.161: 99.3181% ( 4) 00:18:02.186 17.161 - 17.256: 99.3403% ( 3) 00:18:02.186 17.446 - 17.541: 99.3478% ( 1) 00:18:02.186 17.541 - 17.636: 99.3626% ( 2) 00:18:02.186 18.299 - 18.394: 99.3700% ( 1) 00:18:02.186 18.584 - 18.679: 99.3774% ( 1) 00:18:02.186 2026.761 - 2038.898: 99.3848% ( 1) 00:18:02.186 2730.667 - 2742.803: 99.3922% ( 1) 00:18:02.186 3009.801 - 3021.938: 99.3996% ( 1) 00:18:02.186 3106.892 - 3131.164: 99.4071% ( 1) 00:18:02.186 3203.982 - 3228.255: 99.4145% ( 1) 00:18:02.186 3980.705 - 4004.978: 99.9036% ( 66) 00:18:02.186 4004.978 - 4029.250: 99.9852% ( 11) 00:18:02.186 4975.881 - 5000.154: 100.0000% ( 2) 00:18:02.186 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:02.186 [ 00:18:02.186 { 00:18:02.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:02.186 "subtype": "Discovery", 00:18:02.186 "listen_addresses": [], 00:18:02.186 "allow_any_host": true, 00:18:02.186 "hosts": [] 00:18:02.186 }, 00:18:02.186 { 00:18:02.186 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:02.186 "subtype": "NVMe", 
00:18:02.186 "listen_addresses": [ 00:18:02.186 { 00:18:02.186 "trtype": "VFIOUSER", 00:18:02.186 "adrfam": "IPv4", 00:18:02.186 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:02.186 "trsvcid": "0" 00:18:02.186 } 00:18:02.186 ], 00:18:02.186 "allow_any_host": true, 00:18:02.186 "hosts": [], 00:18:02.186 "serial_number": "SPDK1", 00:18:02.186 "model_number": "SPDK bdev Controller", 00:18:02.186 "max_namespaces": 32, 00:18:02.186 "min_cntlid": 1, 00:18:02.186 "max_cntlid": 65519, 00:18:02.186 "namespaces": [ 00:18:02.186 { 00:18:02.186 "nsid": 1, 00:18:02.186 "bdev_name": "Malloc1", 00:18:02.186 "name": "Malloc1", 00:18:02.186 "nguid": "6EBAACFD7B724A1D9BAC3E8514D48ECB", 00:18:02.186 "uuid": "6ebaacfd-7b72-4a1d-9bac-3e8514d48ecb" 00:18:02.186 } 00:18:02.186 ] 00:18:02.186 }, 00:18:02.186 { 00:18:02.186 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:02.186 "subtype": "NVMe", 00:18:02.186 "listen_addresses": [ 00:18:02.186 { 00:18:02.186 "trtype": "VFIOUSER", 00:18:02.186 "adrfam": "IPv4", 00:18:02.186 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:02.186 "trsvcid": "0" 00:18:02.186 } 00:18:02.186 ], 00:18:02.186 "allow_any_host": true, 00:18:02.186 "hosts": [], 00:18:02.186 "serial_number": "SPDK2", 00:18:02.186 "model_number": "SPDK bdev Controller", 00:18:02.186 "max_namespaces": 32, 00:18:02.186 "min_cntlid": 1, 00:18:02.186 "max_cntlid": 65519, 00:18:02.186 "namespaces": [ 00:18:02.186 { 00:18:02.186 "nsid": 1, 00:18:02.186 "bdev_name": "Malloc2", 00:18:02.186 "name": "Malloc2", 00:18:02.186 "nguid": "5AF2BB2136844B3EA00EFDB90238DB04", 00:18:02.186 "uuid": "5af2bb21-3684-4b3e-a00e-fdb90238db04" 00:18:02.186 } 00:18:02.186 ] 00:18:02.186 } 00:18:02.186 ] 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1622557 00:18:02.186 05:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:02.186 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:02.186 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.444 [2024-07-25 05:37:55.975370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:02.444 Malloc3 00:18:02.444 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:02.701 [2024-07-25 05:37:56.338895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:02.701 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:02.701 Asynchronous Event Request test 00:18:02.701 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:02.701 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:02.701 Registering asynchronous event callbacks... 00:18:02.701 Starting namespace attribute notice tests for all controllers... 00:18:02.701 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:02.701 aer_cb - Changed Namespace 00:18:02.701 Cleaning up... 00:18:02.959 [ 00:18:02.959 { 00:18:02.959 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:02.959 "subtype": "Discovery", 00:18:02.959 "listen_addresses": [], 00:18:02.959 "allow_any_host": true, 00:18:02.959 "hosts": [] 00:18:02.959 }, 00:18:02.959 { 00:18:02.959 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:02.959 "subtype": "NVMe", 00:18:02.959 "listen_addresses": [ 00:18:02.959 { 00:18:02.959 "trtype": "VFIOUSER", 00:18:02.959 "adrfam": "IPv4", 00:18:02.959 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:02.959 "trsvcid": "0" 00:18:02.959 } 00:18:02.959 ], 00:18:02.959 "allow_any_host": true, 00:18:02.959 "hosts": [], 00:18:02.959 "serial_number": "SPDK1", 00:18:02.959 "model_number": "SPDK bdev Controller", 00:18:02.959 "max_namespaces": 32, 00:18:02.959 "min_cntlid": 1, 00:18:02.959 "max_cntlid": 65519, 00:18:02.959 "namespaces": [ 00:18:02.959 { 00:18:02.959 "nsid": 1, 00:18:02.959 "bdev_name": "Malloc1", 00:18:02.959 "name": "Malloc1", 00:18:02.959 "nguid": "6EBAACFD7B724A1D9BAC3E8514D48ECB", 00:18:02.959 "uuid": "6ebaacfd-7b72-4a1d-9bac-3e8514d48ecb" 00:18:02.959 }, 00:18:02.959 { 00:18:02.959 "nsid": 2, 00:18:02.959 "bdev_name": "Malloc3", 00:18:02.959 "name": "Malloc3", 00:18:02.959 "nguid": "2A267E2151E840B1A1314ED333A44C99", 00:18:02.959 "uuid": "2a267e21-51e8-40b1-a131-4ed333a44c99" 00:18:02.959 } 00:18:02.959 ] 00:18:02.959 }, 00:18:02.959 { 00:18:02.959 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:18:02.959 "subtype": "NVMe", 00:18:02.959 "listen_addresses": [ 00:18:02.959 { 00:18:02.959 "trtype": "VFIOUSER", 00:18:02.959 "adrfam": "IPv4", 00:18:02.959 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:02.959 "trsvcid": "0" 00:18:02.959 } 00:18:02.959 ], 00:18:02.959 "allow_any_host": true, 00:18:02.959 "hosts": [], 00:18:02.959 "serial_number": "SPDK2", 00:18:02.959 "model_number": "SPDK bdev Controller", 00:18:02.959 "max_namespaces": 32, 00:18:02.959 "min_cntlid": 1, 00:18:02.959 "max_cntlid": 65519, 00:18:02.959 "namespaces": [ 00:18:02.959 { 00:18:02.959 "nsid": 1, 00:18:02.959 "bdev_name": "Malloc2", 00:18:02.959 "name": "Malloc2", 00:18:02.959 "nguid": "5AF2BB2136844B3EA00EFDB90238DB04", 00:18:02.959 "uuid": "5af2bb21-3684-4b3e-a00e-fdb90238db04" 00:18:02.959 } 00:18:02.959 ] 00:18:02.959 } 00:18:02.959 ] 00:18:02.959 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1622557 00:18:02.959 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:02.959 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:02.959 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:02.959 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:02.959 [2024-07-25 05:37:56.649274] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:18:02.959 [2024-07-25 05:37:56.649319] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622694 ] 00:18:02.959 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.219 [2024-07-25 05:37:56.684805] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:03.219 [2024-07-25 05:37:56.692616] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:03.219 [2024-07-25 05:37:56.692646] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2d0a2da000 00:18:03.219 [2024-07-25 05:37:56.693612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:03.219 [2024-07-25 05:37:56.694614] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:03.219 [2024-07-25 05:37:56.695625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:03.219 [2024-07-25 05:37:56.696629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:03.219 [2024-07-25 05:37:56.697636] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:03.219 [2024-07-25 05:37:56.698637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:03.219 [2024-07-25 05:37:56.699650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:03.219 [2024-07-25 05:37:56.700655] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:03.219 [2024-07-25 05:37:56.701664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:03.219 [2024-07-25 05:37:56.701687] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2d0908e000 00:18:03.219 [2024-07-25 05:37:56.702842] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:03.219 [2024-07-25 05:37:56.719073] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:03.219 [2024-07-25 05:37:56.719113] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:03.219 [2024-07-25 05:37:56.721214] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:03.219 [2024-07-25 05:37:56.721292] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:03.219 [2024-07-25 05:37:56.721398] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:03.219 [2024-07-25 05:37:56.721430] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:03.219 [2024-07-25 05:37:56.721442] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:03.219 [2024-07-25 05:37:56.723253] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:03.219 [2024-07-25 05:37:56.723284] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:03.219 [2024-07-25 05:37:56.723300] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:03.219 [2024-07-25 05:37:56.724251] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:03.219 [2024-07-25 05:37:56.724273] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:03.219 [2024-07-25 05:37:56.724288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:03.219 [2024-07-25 05:37:56.725252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:03.219 [2024-07-25 05:37:56.725278] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:03.219 [2024-07-25 05:37:56.726248] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:03.219 [2024-07-25 05:37:56.726284] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:03.219 [2024-07-25 05:37:56.726294] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:03.219 [2024-07-25 05:37:56.726306] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:03.219 [2024-07-25 05:37:56.726415] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:03.219 [2024-07-25 05:37:56.726424] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:03.219 [2024-07-25 05:37:56.726434] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:03.219 [2024-07-25 05:37:56.727271] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:03.219 [2024-07-25 05:37:56.728276] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:03.219 [2024-07-25 05:37:56.729289] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:03.219 [2024-07-25 05:37:56.730281] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:03.219 [2024-07-25 05:37:56.730356] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:03.219 [2024-07-25 05:37:56.731300] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:03.219 [2024-07-25 05:37:56.731322] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:03.220 [2024-07-25 05:37:56.731331] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.731357] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:03.220 [2024-07-25 05:37:56.731371] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.731403] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:03.220 [2024-07-25 05:37:56.731414] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:03.220 [2024-07-25 05:37:56.731421] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.220 [2024-07-25 05:37:56.731444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.742259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.742287] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:03.220 [2024-07-25 05:37:56.742296] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:03.220 [2024-07-25 05:37:56.742309] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:03.220 [2024-07-25 05:37:56.742318] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:03.220 [2024-07-25 05:37:56.742327] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:03.220 [2024-07-25 05:37:56.742335] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:03.220 [2024-07-25 05:37:56.742344] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.742358] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.742380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.750251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.750288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.220 [2024-07-25 05:37:56.750306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.220 [2024-07-25 05:37:56.750321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.220 [2024-07-25 05:37:56.750336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.220 [2024-07-25 05:37:56.750347] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.750366] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.750385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.758251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.758271] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:03.220 [2024-07-25 05:37:56.758281] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.758299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.758311] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.758327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.766254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.766332] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.766350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.766365] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:03.220 [2024-07-25 05:37:56.766378] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:03.220 [2024-07-25 05:37:56.766385] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.220 [2024-07-25 05:37:56.766395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.774255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.774281] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:03.220 [2024-07-25 05:37:56.774304] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.774321] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.774334] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:03.220 [2024-07-25 05:37:56.774342] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:03.220 [2024-07-25 05:37:56.774348] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.220 [2024-07-25 05:37:56.774358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.782252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.782284] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.782301] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.782314] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:03.220 [2024-07-25 05:37:56.782322] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:03.220 [2024-07-25 05:37:56.782329] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.220 [2024-07-25 05:37:56.782338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.790255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.790287] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.790300] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.790317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.790332] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:03.220 
[2024-07-25 05:37:56.790341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.790350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.790359] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:03.220 [2024-07-25 05:37:56.790370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:03.220 [2024-07-25 05:37:56.790379] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:03.220 [2024-07-25 05:37:56.790408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.798268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.798294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.806253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.806278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.814257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.814282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:03.220 [2024-07-25 05:37:56.822251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:03.220 [2024-07-25 05:37:56.822284] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:03.220 [2024-07-25 05:37:56.822295] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:03.220 [2024-07-25 05:37:56.822302] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:03.220 [2024-07-25 05:37:56.822308] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:03.220 [2024-07-25 05:37:56.822314] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:03.221 [2024-07-25 05:37:56.822323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:03.221 [2024-07-25 05:37:56.822335] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:03.221 [2024-07-25 05:37:56.822343] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:03.221 [2024-07-25 05:37:56.822349] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.221 [2024-07-25 05:37:56.822358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:03.221 [2024-07-25 05:37:56.822369] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:03.221 [2024-07-25 05:37:56.822377] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:18:03.221 [2024-07-25 05:37:56.822383] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.221 [2024-07-25 05:37:56.822391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:03.221 [2024-07-25 05:37:56.822403] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:03.221 [2024-07-25 05:37:56.822411] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:03.221 [2024-07-25 05:37:56.822417] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.221 [2024-07-25 05:37:56.822426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:03.221 [2024-07-25 05:37:56.830251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:03.221 [2024-07-25 05:37:56.830280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:03.221 [2024-07-25 05:37:56.830299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:03.221 [2024-07-25 05:37:56.830311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:03.221 ===================================================== 00:18:03.221 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.221 ===================================================== 00:18:03.221 Controller Capabilities/Features 00:18:03.221 ================================ 00:18:03.221 Vendor ID: 4e58 00:18:03.221 
Subsystem Vendor ID: 4e58 00:18:03.221 Serial Number: SPDK2 00:18:03.221 Model Number: SPDK bdev Controller 00:18:03.221 Firmware Version: 24.09 00:18:03.221 Recommended Arb Burst: 6 00:18:03.221 IEEE OUI Identifier: 8d 6b 50 00:18:03.221 Multi-path I/O 00:18:03.221 May have multiple subsystem ports: Yes 00:18:03.221 May have multiple controllers: Yes 00:18:03.221 Associated with SR-IOV VF: No 00:18:03.221 Max Data Transfer Size: 131072 00:18:03.221 Max Number of Namespaces: 32 00:18:03.221 Max Number of I/O Queues: 127 00:18:03.221 NVMe Specification Version (VS): 1.3 00:18:03.221 NVMe Specification Version (Identify): 1.3 00:18:03.221 Maximum Queue Entries: 256 00:18:03.221 Contiguous Queues Required: Yes 00:18:03.221 Arbitration Mechanisms Supported 00:18:03.221 Weighted Round Robin: Not Supported 00:18:03.221 Vendor Specific: Not Supported 00:18:03.221 Reset Timeout: 15000 ms 00:18:03.221 Doorbell Stride: 4 bytes 00:18:03.221 NVM Subsystem Reset: Not Supported 00:18:03.221 Command Sets Supported 00:18:03.221 NVM Command Set: Supported 00:18:03.221 Boot Partition: Not Supported 00:18:03.221 Memory Page Size Minimum: 4096 bytes 00:18:03.221 Memory Page Size Maximum: 4096 bytes 00:18:03.221 Persistent Memory Region: Not Supported 00:18:03.221 Optional Asynchronous Events Supported 00:18:03.221 Namespace Attribute Notices: Supported 00:18:03.221 Firmware Activation Notices: Not Supported 00:18:03.221 ANA Change Notices: Not Supported 00:18:03.221 PLE Aggregate Log Change Notices: Not Supported 00:18:03.221 LBA Status Info Alert Notices: Not Supported 00:18:03.221 EGE Aggregate Log Change Notices: Not Supported 00:18:03.221 Normal NVM Subsystem Shutdown event: Not Supported 00:18:03.221 Zone Descriptor Change Notices: Not Supported 00:18:03.221 Discovery Log Change Notices: Not Supported 00:18:03.221 Controller Attributes 00:18:03.221 128-bit Host Identifier: Supported 00:18:03.221 Non-Operational Permissive Mode: Not Supported 00:18:03.221 NVM Sets: Not Supported 
00:18:03.221 Read Recovery Levels: Not Supported 00:18:03.221 Endurance Groups: Not Supported 00:18:03.221 Predictable Latency Mode: Not Supported 00:18:03.221 Traffic Based Keep ALive: Not Supported 00:18:03.221 Namespace Granularity: Not Supported 00:18:03.221 SQ Associations: Not Supported 00:18:03.221 UUID List: Not Supported 00:18:03.221 Multi-Domain Subsystem: Not Supported 00:18:03.221 Fixed Capacity Management: Not Supported 00:18:03.221 Variable Capacity Management: Not Supported 00:18:03.221 Delete Endurance Group: Not Supported 00:18:03.221 Delete NVM Set: Not Supported 00:18:03.221 Extended LBA Formats Supported: Not Supported 00:18:03.221 Flexible Data Placement Supported: Not Supported 00:18:03.221 00:18:03.221 Controller Memory Buffer Support 00:18:03.221 ================================ 00:18:03.221 Supported: No 00:18:03.221 00:18:03.221 Persistent Memory Region Support 00:18:03.221 ================================ 00:18:03.221 Supported: No 00:18:03.221 00:18:03.221 Admin Command Set Attributes 00:18:03.221 ============================ 00:18:03.221 Security Send/Receive: Not Supported 00:18:03.221 Format NVM: Not Supported 00:18:03.221 Firmware Activate/Download: Not Supported 00:18:03.221 Namespace Management: Not Supported 00:18:03.221 Device Self-Test: Not Supported 00:18:03.221 Directives: Not Supported 00:18:03.221 NVMe-MI: Not Supported 00:18:03.221 Virtualization Management: Not Supported 00:18:03.221 Doorbell Buffer Config: Not Supported 00:18:03.221 Get LBA Status Capability: Not Supported 00:18:03.221 Command & Feature Lockdown Capability: Not Supported 00:18:03.221 Abort Command Limit: 4 00:18:03.221 Async Event Request Limit: 4 00:18:03.221 Number of Firmware Slots: N/A 00:18:03.221 Firmware Slot 1 Read-Only: N/A 00:18:03.221 Firmware Activation Without Reset: N/A 00:18:03.221 Multiple Update Detection Support: N/A 00:18:03.221 Firmware Update Granularity: No Information Provided 00:18:03.221 Per-Namespace SMART Log: No 00:18:03.221 
Asymmetric Namespace Access Log Page: Not Supported 00:18:03.221 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:03.221 Command Effects Log Page: Supported 00:18:03.221 Get Log Page Extended Data: Supported 00:18:03.221 Telemetry Log Pages: Not Supported 00:18:03.221 Persistent Event Log Pages: Not Supported 00:18:03.221 Supported Log Pages Log Page: May Support 00:18:03.221 Commands Supported & Effects Log Page: Not Supported 00:18:03.221 Feature Identifiers & Effects Log Page:May Support 00:18:03.221 NVMe-MI Commands & Effects Log Page: May Support 00:18:03.221 Data Area 4 for Telemetry Log: Not Supported 00:18:03.221 Error Log Page Entries Supported: 128 00:18:03.221 Keep Alive: Supported 00:18:03.221 Keep Alive Granularity: 10000 ms 00:18:03.221 00:18:03.221 NVM Command Set Attributes 00:18:03.221 ========================== 00:18:03.221 Submission Queue Entry Size 00:18:03.221 Max: 64 00:18:03.221 Min: 64 00:18:03.221 Completion Queue Entry Size 00:18:03.221 Max: 16 00:18:03.221 Min: 16 00:18:03.221 Number of Namespaces: 32 00:18:03.221 Compare Command: Supported 00:18:03.221 Write Uncorrectable Command: Not Supported 00:18:03.221 Dataset Management Command: Supported 00:18:03.221 Write Zeroes Command: Supported 00:18:03.221 Set Features Save Field: Not Supported 00:18:03.221 Reservations: Not Supported 00:18:03.221 Timestamp: Not Supported 00:18:03.221 Copy: Supported 00:18:03.221 Volatile Write Cache: Present 00:18:03.221 Atomic Write Unit (Normal): 1 00:18:03.221 Atomic Write Unit (PFail): 1 00:18:03.221 Atomic Compare & Write Unit: 1 00:18:03.221 Fused Compare & Write: Supported 00:18:03.221 Scatter-Gather List 00:18:03.221 SGL Command Set: Supported (Dword aligned) 00:18:03.221 SGL Keyed: Not Supported 00:18:03.221 SGL Bit Bucket Descriptor: Not Supported 00:18:03.221 SGL Metadata Pointer: Not Supported 00:18:03.221 Oversized SGL: Not Supported 00:18:03.221 SGL Metadata Address: Not Supported 00:18:03.221 SGL Offset: Not Supported 00:18:03.221 Transport 
SGL Data Block: Not Supported 00:18:03.221 Replay Protected Memory Block: Not Supported 00:18:03.221 00:18:03.221 Firmware Slot Information 00:18:03.221 ========================= 00:18:03.221 Active slot: 1 00:18:03.221 Slot 1 Firmware Revision: 24.09 00:18:03.221 00:18:03.221 00:18:03.221 Commands Supported and Effects 00:18:03.221 ============================== 00:18:03.221 Admin Commands 00:18:03.222 -------------- 00:18:03.222 Get Log Page (02h): Supported 00:18:03.222 Identify (06h): Supported 00:18:03.222 Abort (08h): Supported 00:18:03.222 Set Features (09h): Supported 00:18:03.222 Get Features (0Ah): Supported 00:18:03.222 Asynchronous Event Request (0Ch): Supported 00:18:03.222 Keep Alive (18h): Supported 00:18:03.222 I/O Commands 00:18:03.222 ------------ 00:18:03.222 Flush (00h): Supported LBA-Change 00:18:03.222 Write (01h): Supported LBA-Change 00:18:03.222 Read (02h): Supported 00:18:03.222 Compare (05h): Supported 00:18:03.222 Write Zeroes (08h): Supported LBA-Change 00:18:03.222 Dataset Management (09h): Supported LBA-Change 00:18:03.222 Copy (19h): Supported LBA-Change 00:18:03.222 00:18:03.222 Error Log 00:18:03.222 ========= 00:18:03.222 00:18:03.222 Arbitration 00:18:03.222 =========== 00:18:03.222 Arbitration Burst: 1 00:18:03.222 00:18:03.222 Power Management 00:18:03.222 ================ 00:18:03.222 Number of Power States: 1 00:18:03.222 Current Power State: Power State #0 00:18:03.222 Power State #0: 00:18:03.222 Max Power: 0.00 W 00:18:03.222 Non-Operational State: Operational 00:18:03.222 Entry Latency: Not Reported 00:18:03.222 Exit Latency: Not Reported 00:18:03.222 Relative Read Throughput: 0 00:18:03.222 Relative Read Latency: 0 00:18:03.222 Relative Write Throughput: 0 00:18:03.222 Relative Write Latency: 0 00:18:03.222 Idle Power: Not Reported 00:18:03.222 Active Power: Not Reported 00:18:03.222 Non-Operational Permissive Mode: Not Supported 00:18:03.222 00:18:03.222 Health Information 00:18:03.222 ================== 00:18:03.222 
Critical Warnings: 00:18:03.222 Available Spare Space: OK 00:18:03.222 Temperature: OK 00:18:03.222 Device Reliability: OK 00:18:03.222 Read Only: No 00:18:03.222 Volatile Memory Backup: OK 00:18:03.222 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:03.222 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:03.222 Available Spare: 0% 00:18:03.222 Available Spare Threshold: 0% 00:18:03.222 [2024-07-25 05:37:56.830430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:03.222 [2024-07-25 05:37:56.838254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:03.222 [2024-07-25 05:37:56.838305] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:03.222 [2024-07-25 05:37:56.838324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.222 [2024-07-25 05:37:56.838335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.222 [2024-07-25 05:37:56.838345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.222 [2024-07-25 05:37:56.838355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.222 [2024-07-25 05:37:56.838421] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:03.222 [2024-07-25 05:37:56.838444] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:03.222 [2024-07-25 05:37:56.839424] vfio_user.c:2798:disable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user2/2: disabling controller [2024-07-25 05:37:56.839512] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:03.222 [2024-07-25 05:37:56.839528] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:03.222 [2024-07-25 05:37:56.840430] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:03.222 [2024-07-25 05:37:56.840455] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:03.222 [2024-07-25 05:37:56.840520] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:03.222 [2024-07-25 05:37:56.841761] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:03.222 Life Percentage Used: 0% 00:18:03.222 Data Units Read: 0 00:18:03.222 Data Units Written: 0 00:18:03.222 Host Read Commands: 0 00:18:03.222 Host Write Commands: 0 00:18:03.222 Controller Busy Time: 0 minutes 00:18:03.222 Power Cycles: 0 00:18:03.222 Power On Hours: 0 hours 00:18:03.222 Unsafe Shutdowns: 0 00:18:03.222 Unrecoverable Media Errors: 0 00:18:03.222 Lifetime Error Log Entries: 0 00:18:03.222 Warning Temperature Time: 0 minutes 00:18:03.222 Critical Temperature Time: 0 minutes 00:18:03.222 00:18:03.222 Number of Queues 00:18:03.222 ================ 00:18:03.222 Number of I/O Submission Queues: 127 00:18:03.222 Number of I/O Completion Queues: 127 00:18:03.222 00:18:03.222 Active Namespaces 00:18:03.222 ================= 00:18:03.222 Namespace ID:1 00:18:03.222 Error Recovery Timeout: Unlimited 00:18:03.222 Command Set Identifier: NVM (00h) 00:18:03.222 Deallocate:
Supported 00:18:03.222 Deallocated/Unwritten Error: Not Supported 00:18:03.222 Deallocated Read Value: Unknown 00:18:03.222 Deallocate in Write Zeroes: Not Supported 00:18:03.222 Deallocated Guard Field: 0xFFFF 00:18:03.222 Flush: Supported 00:18:03.222 Reservation: Supported 00:18:03.222 Namespace Sharing Capabilities: Multiple Controllers 00:18:03.222 Size (in LBAs): 131072 (0GiB) 00:18:03.222 Capacity (in LBAs): 131072 (0GiB) 00:18:03.222 Utilization (in LBAs): 131072 (0GiB) 00:18:03.222 NGUID: 5AF2BB2136844B3EA00EFDB90238DB04 00:18:03.222 UUID: 5af2bb21-3684-4b3e-a00e-fdb90238db04 00:18:03.222 Thin Provisioning: Not Supported 00:18:03.222 Per-NS Atomic Units: Yes 00:18:03.222 Atomic Boundary Size (Normal): 0 00:18:03.222 Atomic Boundary Size (PFail): 0 00:18:03.222 Atomic Boundary Offset: 0 00:18:03.222 Maximum Single Source Range Length: 65535 00:18:03.222 Maximum Copy Length: 65535 00:18:03.222 Maximum Source Range Count: 1 00:18:03.222 NGUID/EUI64 Never Reused: No 00:18:03.222 Namespace Write Protected: No 00:18:03.222 Number of LBA Formats: 1 00:18:03.222 Current LBA Format: LBA Format #00 00:18:03.222 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:03.222 00:18:03.222 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:03.222 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.480 [2024-07-25 05:37:57.067981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:08.743 Initializing NVMe Controllers 00:18:08.743 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:08.743 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:08.743 
Initialization complete. Launching workers. 00:18:08.743 ======================================================== 00:18:08.743 Latency(us) 00:18:08.743 Device Information : IOPS MiB/s Average min max 00:18:08.743 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35535.88 138.81 3601.19 1165.38 7438.83 00:18:08.743 ======================================================== 00:18:08.743 Total : 35535.88 138.81 3601.19 1165.38 7438.83 00:18:08.743 00:18:08.743 [2024-07-25 05:38:02.173614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:08.743 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:08.743 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.743 [2024-07-25 05:38:02.405200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:14.005 Initializing NVMe Controllers 00:18:14.006 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:14.006 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:14.006 Initialization complete. Launching workers. 
00:18:14.006 ======================================================== 00:18:14.006 Latency(us) 00:18:14.006 Device Information : IOPS MiB/s Average min max 00:18:14.006 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32770.18 128.01 3905.62 1208.55 7472.54 00:18:14.006 ======================================================== 00:18:14.006 Total : 32770.18 128.01 3905.62 1208.55 7472.54 00:18:14.006 00:18:14.006 [2024-07-25 05:38:07.426749] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:14.006 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:14.006 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.006 [2024-07-25 05:38:07.636603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:19.267 [2024-07-25 05:38:12.765608] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:19.267 Initializing NVMe Controllers 00:18:19.267 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:19.267 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:19.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:19.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:19.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:19.267 Initialization complete. Launching workers. 
00:18:19.267 Starting thread on core 2 00:18:19.267 Starting thread on core 3 00:18:19.267 Starting thread on core 1 00:18:19.267 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:19.267 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.525 [2024-07-25 05:38:13.064919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:22.806 [2024-07-25 05:38:16.119031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:22.806 Initializing NVMe Controllers 00:18:22.806 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:22.806 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:22.806 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:22.806 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:22.806 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:22.806 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:22.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:22.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:22.806 Initialization complete. Launching workers. 
00:18:22.806 Starting thread on core 1 with urgent priority queue 00:18:22.806 Starting thread on core 2 with urgent priority queue 00:18:22.806 Starting thread on core 3 with urgent priority queue 00:18:22.806 Starting thread on core 0 with urgent priority queue 00:18:22.806 SPDK bdev Controller (SPDK2 ) core 0: 6444.33 IO/s 15.52 secs/100000 ios 00:18:22.806 SPDK bdev Controller (SPDK2 ) core 1: 4857.33 IO/s 20.59 secs/100000 ios 00:18:22.806 SPDK bdev Controller (SPDK2 ) core 2: 5865.00 IO/s 17.05 secs/100000 ios 00:18:22.806 SPDK bdev Controller (SPDK2 ) core 3: 5912.67 IO/s 16.91 secs/100000 ios 00:18:22.806 ======================================================== 00:18:22.806 00:18:22.806 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:22.806 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.806 [2024-07-25 05:38:16.410772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:22.806 Initializing NVMe Controllers 00:18:22.806 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:22.806 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:22.806 Namespace ID: 1 size: 0GB 00:18:22.806 Initialization complete. 00:18:22.806 INFO: using host memory buffer for IO 00:18:22.806 Hello world! 
00:18:22.806 [2024-07-25 05:38:16.419829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:22.806 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:23.064 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.064 [2024-07-25 05:38:16.707647] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:24.437 Initializing NVMe Controllers 00:18:24.437 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:24.437 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:24.437 Initialization complete. Launching workers. 00:18:24.437 submit (in ns) avg, min, max = 7137.0, 3493.3, 4014752.2 00:18:24.437 complete (in ns) avg, min, max = 26473.7, 2052.2, 4015438.9 00:18:24.437 00:18:24.437 Submit histogram 00:18:24.437 ================ 00:18:24.437 Range in us Cumulative Count 00:18:24.437 3.484 - 3.508: 0.6011% ( 82) 00:18:24.437 3.508 - 3.532: 1.5761% ( 133) 00:18:24.437 3.532 - 3.556: 4.6404% ( 418) 00:18:24.437 3.556 - 3.579: 10.3438% ( 778) 00:18:24.437 3.579 - 3.603: 19.4414% ( 1241) 00:18:24.437 3.603 - 3.627: 28.6416% ( 1255) 00:18:24.437 3.627 - 3.650: 37.0721% ( 1150) 00:18:24.437 3.650 - 3.674: 43.3399% ( 855) 00:18:24.437 3.674 - 3.698: 49.8057% ( 882) 00:18:24.437 3.698 - 3.721: 55.9343% ( 836) 00:18:24.437 3.721 - 3.745: 60.1422% ( 574) 00:18:24.437 3.745 - 3.769: 63.5511% ( 465) 00:18:24.437 3.769 - 3.793: 66.5934% ( 415) 00:18:24.437 3.793 - 3.816: 70.1268% ( 482) 00:18:24.437 3.816 - 3.840: 74.2174% ( 558) 00:18:24.437 3.840 - 3.864: 78.7699% ( 621) 00:18:24.437 3.864 - 3.887: 82.0614% ( 449) 00:18:24.437 3.887 - 3.911: 84.6126% ( 348) 00:18:24.437 3.911 - 3.935: 86.9804% ( 323) 00:18:24.437 3.935 - 
3.959: 88.9451% ( 268) 00:18:24.437 3.959 - 3.982: 90.4552% ( 206) 00:18:24.437 3.982 - 4.006: 91.5402% ( 148) 00:18:24.437 4.006 - 4.030: 92.6178% ( 147) 00:18:24.437 4.030 - 4.053: 93.6515% ( 141) 00:18:24.437 4.053 - 4.077: 94.5092% ( 117) 00:18:24.437 4.077 - 4.101: 95.1763% ( 91) 00:18:24.437 4.101 - 4.124: 95.7041% ( 72) 00:18:24.437 4.124 - 4.148: 96.1293% ( 58) 00:18:24.437 4.148 - 4.172: 96.4225% ( 40) 00:18:24.437 4.172 - 4.196: 96.6205% ( 27) 00:18:24.437 4.196 - 4.219: 96.8257% ( 28) 00:18:24.437 4.219 - 4.243: 96.9944% ( 23) 00:18:24.437 4.243 - 4.267: 97.0897% ( 13) 00:18:24.437 4.267 - 4.290: 97.1850% ( 13) 00:18:24.437 4.290 - 4.314: 97.3096% ( 17) 00:18:24.437 4.314 - 4.338: 97.3609% ( 7) 00:18:24.437 4.338 - 4.361: 97.3902% ( 4) 00:18:24.437 4.361 - 4.385: 97.4049% ( 2) 00:18:24.437 4.385 - 4.409: 97.4342% ( 4) 00:18:24.437 4.409 - 4.433: 97.4782% ( 6) 00:18:24.437 4.433 - 4.456: 97.4855% ( 1) 00:18:24.437 4.456 - 4.480: 97.4929% ( 1) 00:18:24.437 4.527 - 4.551: 97.5002% ( 1) 00:18:24.437 4.551 - 4.575: 97.5075% ( 1) 00:18:24.437 4.599 - 4.622: 97.5148% ( 1) 00:18:24.437 4.646 - 4.670: 97.5515% ( 5) 00:18:24.437 4.670 - 4.693: 97.5735% ( 3) 00:18:24.437 4.693 - 4.717: 97.6101% ( 5) 00:18:24.437 4.717 - 4.741: 97.6321% ( 3) 00:18:24.437 4.741 - 4.764: 97.6761% ( 6) 00:18:24.437 4.764 - 4.788: 97.7421% ( 9) 00:18:24.437 4.788 - 4.812: 97.7788% ( 5) 00:18:24.437 4.812 - 4.836: 97.8227% ( 6) 00:18:24.437 4.836 - 4.859: 97.8594% ( 5) 00:18:24.437 4.859 - 4.883: 97.9107% ( 7) 00:18:24.437 4.883 - 4.907: 97.9400% ( 4) 00:18:24.437 4.907 - 4.930: 97.9694% ( 4) 00:18:24.437 4.930 - 4.954: 98.0280% ( 8) 00:18:24.437 4.954 - 4.978: 98.0573% ( 4) 00:18:24.437 4.978 - 5.001: 98.0940% ( 5) 00:18:24.437 5.001 - 5.025: 98.1233% ( 4) 00:18:24.437 5.025 - 5.049: 98.1380% ( 2) 00:18:24.437 5.049 - 5.073: 98.1673% ( 4) 00:18:24.437 5.073 - 5.096: 98.1966% ( 4) 00:18:24.437 5.096 - 5.120: 98.2039% ( 1) 00:18:24.437 5.120 - 5.144: 98.2186% ( 2) 00:18:24.437 5.144 - 
5.167: 98.2406% ( 3) 00:18:24.437 5.167 - 5.191: 98.2699% ( 4) 00:18:24.437 5.191 - 5.215: 98.2773% ( 1) 00:18:24.437 5.215 - 5.239: 98.2846% ( 1) 00:18:24.437 5.262 - 5.286: 98.2992% ( 2) 00:18:24.437 5.286 - 5.310: 98.3139% ( 2) 00:18:24.437 5.310 - 5.333: 98.3212% ( 1) 00:18:24.437 5.357 - 5.381: 98.3359% ( 2) 00:18:24.437 5.381 - 5.404: 98.3579% ( 3) 00:18:24.437 5.404 - 5.428: 98.3652% ( 1) 00:18:24.437 5.547 - 5.570: 98.3726% ( 1) 00:18:24.437 5.594 - 5.618: 98.3799% ( 1) 00:18:24.437 5.665 - 5.689: 98.3872% ( 1) 00:18:24.437 5.713 - 5.736: 98.3945% ( 1) 00:18:24.437 6.021 - 6.044: 98.4019% ( 1) 00:18:24.437 6.400 - 6.447: 98.4092% ( 1) 00:18:24.437 6.447 - 6.495: 98.4165% ( 1) 00:18:24.437 6.779 - 6.827: 98.4239% ( 1) 00:18:24.437 6.874 - 6.921: 98.4312% ( 1) 00:18:24.437 6.921 - 6.969: 98.4385% ( 1) 00:18:24.437 6.969 - 7.016: 98.4459% ( 1) 00:18:24.437 7.016 - 7.064: 98.4532% ( 1) 00:18:24.437 7.064 - 7.111: 98.4679% ( 2) 00:18:24.437 7.111 - 7.159: 98.4752% ( 1) 00:18:24.437 7.159 - 7.206: 98.4825% ( 1) 00:18:24.437 7.253 - 7.301: 98.4898% ( 1) 00:18:24.437 7.348 - 7.396: 98.4972% ( 1) 00:18:24.437 7.396 - 7.443: 98.5118% ( 2) 00:18:24.437 7.490 - 7.538: 98.5338% ( 3) 00:18:24.437 7.538 - 7.585: 98.5485% ( 2) 00:18:24.437 7.585 - 7.633: 98.5632% ( 2) 00:18:24.437 7.680 - 7.727: 98.5778% ( 2) 00:18:24.437 7.775 - 7.822: 98.5851% ( 1) 00:18:24.437 7.822 - 7.870: 98.5925% ( 1) 00:18:24.437 7.870 - 7.917: 98.6071% ( 2) 00:18:24.437 7.917 - 7.964: 98.6145% ( 1) 00:18:24.437 7.964 - 8.012: 98.6291% ( 2) 00:18:24.437 8.059 - 8.107: 98.6365% ( 1) 00:18:24.437 8.107 - 8.154: 98.6511% ( 2) 00:18:24.437 8.201 - 8.249: 98.6585% ( 1) 00:18:24.437 8.249 - 8.296: 98.6658% ( 1) 00:18:24.437 8.344 - 8.391: 98.6878% ( 3) 00:18:24.437 8.486 - 8.533: 98.6951% ( 1) 00:18:24.437 8.533 - 8.581: 98.7098% ( 2) 00:18:24.437 8.913 - 8.960: 98.7171% ( 1) 00:18:24.437 9.007 - 9.055: 98.7244% ( 1) 00:18:24.437 9.102 - 9.150: 98.7318% ( 1) 00:18:24.437 9.197 - 9.244: 98.7391% ( 1) 
00:18:24.437 9.292 - 9.339: 98.7464% ( 1) 00:18:24.437 9.529 - 9.576: 98.7538% ( 1) 00:18:24.437 9.624 - 9.671: 98.7611% ( 1) 00:18:24.437 9.813 - 9.861: 98.7684% ( 1) 00:18:24.437 10.050 - 10.098: 98.7757% ( 1) 00:18:24.437 10.098 - 10.145: 98.7831% ( 1) 00:18:24.437 10.382 - 10.430: 98.7977% ( 2) 00:18:24.437 10.524 - 10.572: 98.8124% ( 2) 00:18:24.437 10.619 - 10.667: 98.8197% ( 1) 00:18:24.437 10.714 - 10.761: 98.8271% ( 1) 00:18:24.437 10.761 - 10.809: 98.8344% ( 1) 00:18:24.437 10.904 - 10.951: 98.8417% ( 1) 00:18:24.437 10.951 - 10.999: 98.8491% ( 1) 00:18:24.437 11.046 - 11.093: 98.8637% ( 2) 00:18:24.437 11.093 - 11.141: 98.8711% ( 1) 00:18:24.437 11.236 - 11.283: 98.8857% ( 2) 00:18:24.437 11.378 - 11.425: 98.8930% ( 1) 00:18:24.437 11.520 - 11.567: 98.9004% ( 1) 00:18:24.437 11.615 - 11.662: 98.9077% ( 1) 00:18:24.437 11.757 - 11.804: 98.9150% ( 1) 00:18:24.437 11.899 - 11.947: 98.9224% ( 1) 00:18:24.437 12.041 - 12.089: 98.9297% ( 1) 00:18:24.437 12.136 - 12.231: 98.9444% ( 2) 00:18:24.437 12.326 - 12.421: 98.9517% ( 1) 00:18:24.437 12.610 - 12.705: 98.9590% ( 1) 00:18:24.437 12.705 - 12.800: 98.9664% ( 1) 00:18:24.437 12.800 - 12.895: 98.9737% ( 1) 00:18:24.437 12.990 - 13.084: 98.9810% ( 1) 00:18:24.437 13.179 - 13.274: 98.9883% ( 1) 00:18:24.437 13.559 - 13.653: 99.0030% ( 2) 00:18:24.437 13.653 - 13.748: 99.0103% ( 1) 00:18:24.437 14.507 - 14.601: 99.0177% ( 1) 00:18:24.438 14.791 - 14.886: 99.0250% ( 1) 00:18:24.438 14.886 - 14.981: 99.0323% ( 1) 00:18:24.438 16.782 - 16.877: 99.0397% ( 1) 00:18:24.438 17.256 - 17.351: 99.0470% ( 1) 00:18:24.438 17.351 - 17.446: 99.0690% ( 3) 00:18:24.438 17.446 - 17.541: 99.1056% ( 5) 00:18:24.438 17.541 - 17.636: 99.1423% ( 5) 00:18:24.438 17.636 - 17.730: 99.2156% ( 10) 00:18:24.438 17.730 - 17.825: 99.2449% ( 4) 00:18:24.438 17.825 - 17.920: 99.2816% ( 5) 00:18:24.438 17.920 - 18.015: 99.3256% ( 6) 00:18:24.438 18.015 - 18.110: 99.3402% ( 2) 00:18:24.438 18.110 - 18.204: 99.4429% ( 14) 00:18:24.438 18.204 - 
18.299: 99.5162% ( 10) 00:18:24.438 18.299 - 18.394: 99.5675% ( 7) 00:18:24.438 18.394 - 18.489: 99.6115% ( 6) 00:18:24.438 18.489 - 18.584: 99.6628% ( 7) 00:18:24.438 18.584 - 18.679: 99.6921% ( 4) 00:18:24.438 18.679 - 18.773: 99.7434% ( 7) 00:18:24.438 18.773 - 18.868: 99.7654% ( 3) 00:18:24.438 18.963 - 19.058: 99.8094% ( 6) 00:18:24.438 19.058 - 19.153: 99.8314% ( 3) 00:18:24.438 19.153 - 19.247: 99.8387% ( 1) 00:18:24.438 19.247 - 19.342: 99.8534% ( 2) 00:18:24.438 19.342 - 19.437: 99.8607% ( 1) 00:18:24.438 19.437 - 19.532: 99.8680% ( 1) 00:18:24.438 19.532 - 19.627: 99.8754% ( 1) 00:18:24.438 19.721 - 19.816: 99.8900% ( 2) 00:18:24.438 19.816 - 19.911: 99.8974% ( 1) 00:18:24.438 21.618 - 21.713: 99.9047% ( 1) 00:18:24.438 22.187 - 22.281: 99.9120% ( 1) 00:18:24.438 23.893 - 23.988: 99.9194% ( 1) 00:18:24.438 3980.705 - 4004.978: 99.9853% ( 9) 00:18:24.438 4004.978 - 4029.250: 100.0000% ( 2) 00:18:24.438 00:18:24.438 Complete histogram 00:18:24.438 ================== 00:18:24.438 Range in us Cumulative Count 00:18:24.438 2.050 - 2.062: 3.4528% ( 471) 00:18:24.438 2.062 - 2.074: 33.2747% ( 4068) 00:18:24.438 2.074 - 2.086: 40.5322% ( 990) 00:18:24.438 2.086 - 2.098: 47.3059% ( 924) 00:18:24.438 2.098 - 2.110: 58.6614% ( 1549) 00:18:24.438 2.110 - 2.121: 61.3738% ( 370) 00:18:24.438 2.121 - 2.133: 66.5787% ( 710) 00:18:24.438 2.133 - 2.145: 73.4550% ( 938) 00:18:24.438 2.145 - 2.157: 74.8112% ( 185) 00:18:24.438 2.157 - 2.169: 78.2054% ( 463) 00:18:24.438 2.169 - 2.181: 80.9545% ( 375) 00:18:24.438 2.181 - 2.193: 81.6656% ( 97) 00:18:24.438 2.193 - 2.204: 83.8135% ( 293) 00:18:24.438 2.204 - 2.216: 87.5889% ( 515) 00:18:24.438 2.216 - 2.228: 89.8395% ( 307) 00:18:24.438 2.228 - 2.240: 91.6648% ( 249) 00:18:24.438 2.240 - 2.252: 93.2410% ( 215) 00:18:24.438 2.252 - 2.264: 93.6662% ( 58) 00:18:24.438 2.264 - 2.276: 93.9887% ( 44) 00:18:24.438 2.276 - 2.287: 94.5092% ( 71) 00:18:24.438 2.287 - 2.299: 95.2423% ( 100) 00:18:24.438 2.299 - 2.311: 95.4842% ( 33) 
00:18:24.438 2.311 - 2.323: 95.5648% ( 11) 00:18:24.438 2.323 - 2.335: 95.6601% ( 13) 00:18:24.438 2.335 - 2.347: 95.7481% ( 12) 00:18:24.438 2.347 - 2.359: 95.9827% ( 32) 00:18:24.438 2.359 - 2.370: 96.3932% ( 56) 00:18:24.438 2.370 - 2.382: 96.9210% ( 72) 00:18:24.438 2.382 - 2.394: 97.2436% ( 44) 00:18:24.438 2.394 - 2.406: 97.5222% ( 38) 00:18:24.438 2.406 - 2.418: 97.6688% ( 20) 00:18:24.438 2.418 - 2.430: 97.7934% ( 17) 00:18:24.438 2.430 - 2.441: 97.8814% ( 12) 00:18:24.438 2.441 - 2.453: 97.9767% ( 13) 00:18:24.438 2.453 - 2.465: 98.0573% ( 11) 00:18:24.438 2.465 - 2.477: 98.1453% ( 12) 00:18:24.438 2.477 - 2.489: 98.2039% ( 8) 00:18:24.438 2.489 - 2.501: 98.2773% ( 10) 00:18:24.438 2.501 - 2.513: 98.3139% ( 5) 00:18:24.438 2.513 - 2.524: 98.3579% ( 6) 00:18:24.438 2.524 - 2.536: 98.3652% ( 1) 00:18:24.438 2.536 - 2.548: 98.3945% ( 4) 00:18:24.438 2.548 - 2.560: 98.4019% ( 1) 00:18:24.438 2.560 - 2.572: 98.4092% ( 1) 00:18:24.438 2.607 - 2.619: 98.4239% ( 2) 00:18:24.438 2.631 - 2.643: 98.4312% ( 1) 00:18:24.438 2.643 - 2.655: 98.4385% ( 1) 00:18:24.438 3.081 - 3.105: 98.4459% ( 1) 00:18:24.438 3.105 - 3.129: 98.4532% ( 1) 00:18:24.438 3.176 - 3.200: 98.4605% ( 1) 00:18:24.438 3.271 - 3.295: 98.4679% ( 1) 00:18:24.438 3.295 - 3.319: 98.4752% ( 1) 00:18:24.438 3.390 - 3.413: 98.4825% ( 1) 00:18:24.438 3.413 - 3.437: 98.4972% ( 2) 00:18:24.438 3.437 - 3.461: 98.5118% ( 2) 00:18:24.438 3.484 - 3.508: 98.5338% ( 3) 00:18:24.438 3.508 - 3.532: 98.5485% ( 2) 00:18:24.438 3.603 - 3.627: 98.5558% ( 1) 00:18:24.438 3.627 - 3.650: 98.5632% ( 1) 00:18:24.438 3.674 - 3.698: 98.5705% ( 1) 00:18:24.438 3.698 - 3.721: 98.5778% ( 1) 00:18:24.438 3.721 - 3.745: 98.5925% ( 2) 00:18:24.438 3.793 - 3.816: 98.5998% ( 1) 00:18:24.438 3.816 - 3.840: 98.6145% ( 2) 00:18:24.438 3.887 - 3.911: 98.6218% ( 1) 00:18:24.438 3.935 - 3.959: 98.6291% ( 1) 00:18:24.438 3.959 - 3.982: 98.6365% ( 1) 00:18:24.438 4.077 - 4.101: 98.6438% ( 1) 00:18:24.438 5.120 - 5.144: 98.6511% ( 1) 
00:18:24.438 5.428 - 5.452: 98.6585% ( 1) 00:18:24.438 5.547 - 5.570: 98.6658% ( 1) 00:18:24.438 5.641 - 5.665: 98.6731% ( 1) [2024-07-25 05:38:17.808970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:24.438 5.713 - 5.736: 98.6951% ( 3) 00:18:24.438 5.760 - 5.784: 98.7024% ( 1) 00:18:24.438 5.879 - 5.902: 98.7171% ( 2) 00:18:24.438 5.902 - 5.926: 98.7244% ( 1) 00:18:24.438 5.926 - 5.950: 98.7318% ( 1) 00:18:24.438 5.950 - 5.973: 98.7464% ( 2) 00:18:24.438 5.973 - 5.997: 98.7611% ( 2) 00:18:24.438 5.997 - 6.021: 98.7684% ( 1) 00:18:24.438 6.068 - 6.116: 98.7757% ( 1) 00:18:24.438 6.210 - 6.258: 98.7831% ( 1) 00:18:24.438 6.590 - 6.637: 98.7904% ( 1) 00:18:24.438 6.684 - 6.732: 98.7977% ( 1) 00:18:24.438 6.779 - 6.827: 98.8051% ( 1) 00:18:24.438 6.827 - 6.874: 98.8124% ( 1) 00:18:24.438 6.969 - 7.016: 98.8197% ( 1) 00:18:24.438 7.064 - 7.111: 98.8271% ( 1) 00:18:24.438 7.111 - 7.159: 98.8344% ( 1) 00:18:24.438 7.348 - 7.396: 98.8417% ( 1) 00:18:24.438 8.012 - 8.059: 98.8491% ( 1) 00:18:24.438 15.360 - 15.455: 98.8564% ( 1) 00:18:24.438 15.834 - 15.929: 98.8857% ( 4) 00:18:24.438 16.024 - 16.119: 98.9150% ( 4) 00:18:24.438 16.119 - 16.213: 98.9664% ( 7) 00:18:24.438 16.213 - 16.308: 98.9883% ( 3) 00:18:24.438 16.308 - 16.403: 99.0103% ( 3) 00:18:24.438 16.403 - 16.498: 99.0910% ( 11) 00:18:24.438 16.498 - 16.593: 99.1350% ( 6) 00:18:24.438 16.593 - 16.687: 99.2009% ( 9) 00:18:24.438 16.687 - 16.782: 99.2376% ( 5) 00:18:24.438 16.782 - 16.877: 99.2596% ( 3) 00:18:24.438 16.877 - 16.972: 99.2816% ( 3) 00:18:24.438 16.972 - 17.067: 99.3182% ( 5) 00:18:24.438 17.067 - 17.161: 99.3329% ( 2) 00:18:24.438 17.161 - 17.256: 99.3476% ( 2) 00:18:24.438 17.351 - 17.446: 99.3549% ( 1) 00:18:24.438 17.541 - 17.636: 99.3695% ( 2) 00:18:24.438 17.920 - 18.015: 99.3769% ( 1) 00:18:24.438 18.015 - 18.110: 99.3842% ( 1) 00:18:24.438 18.110 - 18.204: 99.3915% ( 1) 00:18:24.438 3009.801 - 3021.938: 99.3989% ( 1) 
00:18:24.438 3835.070 - 3859.342: 99.4062% ( 1) 00:18:24.438 3980.705 - 4004.978: 99.8534% ( 61) 00:18:24.438 4004.978 - 4029.250: 100.0000% ( 20) 00:18:24.438 00:18:24.438 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:24.438 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:24.438 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:24.438 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:24.438 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:24.438 [ 00:18:24.438 { 00:18:24.438 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:24.438 "subtype": "Discovery", 00:18:24.438 "listen_addresses": [], 00:18:24.438 "allow_any_host": true, 00:18:24.438 "hosts": [] 00:18:24.438 }, 00:18:24.438 { 00:18:24.438 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:24.438 "subtype": "NVMe", 00:18:24.438 "listen_addresses": [ 00:18:24.438 { 00:18:24.438 "trtype": "VFIOUSER", 00:18:24.438 "adrfam": "IPv4", 00:18:24.438 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:24.438 "trsvcid": "0" 00:18:24.438 } 00:18:24.438 ], 00:18:24.438 "allow_any_host": true, 00:18:24.438 "hosts": [], 00:18:24.438 "serial_number": "SPDK1", 00:18:24.438 "model_number": "SPDK bdev Controller", 00:18:24.439 "max_namespaces": 32, 00:18:24.439 "min_cntlid": 1, 00:18:24.439 "max_cntlid": 65519, 00:18:24.439 "namespaces": [ 00:18:24.439 { 00:18:24.439 "nsid": 1, 00:18:24.439 "bdev_name": "Malloc1", 00:18:24.439 "name": "Malloc1", 00:18:24.439 "nguid": "6EBAACFD7B724A1D9BAC3E8514D48ECB", 00:18:24.439 "uuid": 
"6ebaacfd-7b72-4a1d-9bac-3e8514d48ecb" 00:18:24.439 }, 00:18:24.439 { 00:18:24.439 "nsid": 2, 00:18:24.439 "bdev_name": "Malloc3", 00:18:24.439 "name": "Malloc3", 00:18:24.439 "nguid": "2A267E2151E840B1A1314ED333A44C99", 00:18:24.439 "uuid": "2a267e21-51e8-40b1-a131-4ed333a44c99" 00:18:24.439 } 00:18:24.439 ] 00:18:24.439 }, 00:18:24.439 { 00:18:24.439 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:24.439 "subtype": "NVMe", 00:18:24.439 "listen_addresses": [ 00:18:24.439 { 00:18:24.439 "trtype": "VFIOUSER", 00:18:24.439 "adrfam": "IPv4", 00:18:24.439 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:24.439 "trsvcid": "0" 00:18:24.439 } 00:18:24.439 ], 00:18:24.439 "allow_any_host": true, 00:18:24.439 "hosts": [], 00:18:24.439 "serial_number": "SPDK2", 00:18:24.439 "model_number": "SPDK bdev Controller", 00:18:24.439 "max_namespaces": 32, 00:18:24.439 "min_cntlid": 1, 00:18:24.439 "max_cntlid": 65519, 00:18:24.439 "namespaces": [ 00:18:24.439 { 00:18:24.439 "nsid": 1, 00:18:24.439 "bdev_name": "Malloc2", 00:18:24.439 "name": "Malloc2", 00:18:24.439 "nguid": "5AF2BB2136844B3EA00EFDB90238DB04", 00:18:24.439 "uuid": "5af2bb21-3684-4b3e-a00e-fdb90238db04" 00:18:24.439 } 00:18:24.439 ] 00:18:24.439 } 00:18:24.439 ] 00:18:24.439 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:24.439 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1625200 00:18:24.439 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:24.439 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:24.439 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1265 -- # local i=0 00:18:24.439 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:24.439 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:24.439 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:24.439 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:24.439 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:24.697 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.697 [2024-07-25 05:38:18.259748] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:24.697 Malloc4 00:18:24.697 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:24.954 [2024-07-25 05:38:18.628463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:24.954 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:25.212 Asynchronous Event Request test 00:18:25.212 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:25.212 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:25.212 Registering asynchronous event callbacks... 00:18:25.212 Starting namespace attribute notice tests for all controllers... 
00:18:25.212 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:25.212 aer_cb - Changed Namespace 00:18:25.212 Cleaning up... 00:18:25.212 [ 00:18:25.212 { 00:18:25.212 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:25.212 "subtype": "Discovery", 00:18:25.212 "listen_addresses": [], 00:18:25.212 "allow_any_host": true, 00:18:25.212 "hosts": [] 00:18:25.212 }, 00:18:25.212 { 00:18:25.212 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:25.212 "subtype": "NVMe", 00:18:25.212 "listen_addresses": [ 00:18:25.212 { 00:18:25.212 "trtype": "VFIOUSER", 00:18:25.212 "adrfam": "IPv4", 00:18:25.212 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:25.212 "trsvcid": "0" 00:18:25.212 } 00:18:25.212 ], 00:18:25.212 "allow_any_host": true, 00:18:25.212 "hosts": [], 00:18:25.212 "serial_number": "SPDK1", 00:18:25.212 "model_number": "SPDK bdev Controller", 00:18:25.212 "max_namespaces": 32, 00:18:25.212 "min_cntlid": 1, 00:18:25.212 "max_cntlid": 65519, 00:18:25.212 "namespaces": [ 00:18:25.212 { 00:18:25.212 "nsid": 1, 00:18:25.212 "bdev_name": "Malloc1", 00:18:25.212 "name": "Malloc1", 00:18:25.212 "nguid": "6EBAACFD7B724A1D9BAC3E8514D48ECB", 00:18:25.212 "uuid": "6ebaacfd-7b72-4a1d-9bac-3e8514d48ecb" 00:18:25.212 }, 00:18:25.212 { 00:18:25.212 "nsid": 2, 00:18:25.212 "bdev_name": "Malloc3", 00:18:25.212 "name": "Malloc3", 00:18:25.212 "nguid": "2A267E2151E840B1A1314ED333A44C99", 00:18:25.212 "uuid": "2a267e21-51e8-40b1-a131-4ed333a44c99" 00:18:25.212 } 00:18:25.212 ] 00:18:25.212 }, 00:18:25.212 { 00:18:25.212 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:25.212 "subtype": "NVMe", 00:18:25.212 "listen_addresses": [ 00:18:25.212 { 00:18:25.212 "trtype": "VFIOUSER", 00:18:25.212 "adrfam": "IPv4", 00:18:25.212 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:25.212 "trsvcid": "0" 00:18:25.212 } 00:18:25.212 ], 00:18:25.212 "allow_any_host": true, 00:18:25.212 "hosts": [], 00:18:25.212 "serial_number": 
"SPDK2", 00:18:25.212 "model_number": "SPDK bdev Controller", 00:18:25.212 "max_namespaces": 32, 00:18:25.212 "min_cntlid": 1, 00:18:25.212 "max_cntlid": 65519, 00:18:25.212 "namespaces": [ 00:18:25.212 { 00:18:25.212 "nsid": 1, 00:18:25.212 "bdev_name": "Malloc2", 00:18:25.212 "name": "Malloc2", 00:18:25.212 "nguid": "5AF2BB2136844B3EA00EFDB90238DB04", 00:18:25.212 "uuid": "5af2bb21-3684-4b3e-a00e-fdb90238db04" 00:18:25.212 }, 00:18:25.212 { 00:18:25.212 "nsid": 2, 00:18:25.212 "bdev_name": "Malloc4", 00:18:25.212 "name": "Malloc4", 00:18:25.212 "nguid": "83A239CD714846549857314E618A9F9E", 00:18:25.212 "uuid": "83a239cd-7148-4654-9857-314e618a9f9e" 00:18:25.212 } 00:18:25.212 ] 00:18:25.212 } 00:18:25.212 ] 00:18:25.212 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1625200 00:18:25.212 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:25.212 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1619650 00:18:25.212 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1619650 ']' 00:18:25.212 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1619650 00:18:25.212 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:25.212 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.212 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1619650 00:18:25.212 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:25.212 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:25.470 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1619650' 00:18:25.470 killing process with pid 1619650 00:18:25.470 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1619650 00:18:25.470 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1619650 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1625347 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1625347' 00:18:25.729 Process pid: 1625347 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1625347 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1625347 ']' 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:25.729 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:25.729 [2024-07-25 05:38:19.271275] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:25.729 [2024-07-25 05:38:19.272260] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:18:25.729 [2024-07-25 05:38:19.272315] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.729 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.729 [2024-07-25 05:38:19.328962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:25.729 [2024-07-25 05:38:19.415538] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.729 [2024-07-25 05:38:19.415592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.729 [2024-07-25 05:38:19.415620] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.729 [2024-07-25 05:38:19.415632] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:25.729 [2024-07-25 05:38:19.415642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.729 [2024-07-25 05:38:19.415695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.729 [2024-07-25 05:38:19.415754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.729 [2024-07-25 05:38:19.415818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:25.729 [2024-07-25 05:38:19.415821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.987 [2024-07-25 05:38:19.508119] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:25.987 [2024-07-25 05:38:19.508357] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:25.987 [2024-07-25 05:38:19.508605] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:25.987 [2024-07-25 05:38:19.509153] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:25.987 [2024-07-25 05:38:19.509430] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:25.987 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.987 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:25.987 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:26.918 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:27.177 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:27.177 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:27.177 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:27.177 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:27.177 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:27.435 Malloc1 00:18:27.435 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:27.693 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:27.949 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:28.206 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:28.206 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:28.206 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:28.462 Malloc2 00:18:28.462 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:28.718 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:28.974 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1625347 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1625347 ']' 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1625347 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:29.230 05:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1625347 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1625347' 00:18:29.230 killing process with pid 1625347 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1625347 00:18:29.230 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1625347 00:18:29.492 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:29.492 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:29.492 00:18:29.492 real 0m52.265s 00:18:29.492 user 3m26.400s 00:18:29.492 sys 0m4.401s 00:18:29.492 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.492 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:29.492 ************************************ 00:18:29.492 END TEST nvmf_vfio_user 00:18:29.492 ************************************ 00:18:29.492 05:38:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:29.492 05:38:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:29.492 05:38:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.492 05:38:23 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.492 ************************************ 00:18:29.492 START TEST nvmf_vfio_user_nvme_compliance 00:18:29.492 ************************************ 00:18:29.492 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:29.782 * Looking for test storage... 00:18:29.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.782 05:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:29.782 05:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:29.782 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1625941 00:18:29.783 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:29.783 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1625941' 00:18:29.783 Process pid: 1625941 00:18:29.783 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:29.783 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1625941 00:18:29.783 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1625941 ']' 00:18:29.783 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.783 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:29.783 05:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.783 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:29.783 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:29.783 [2024-07-25 05:38:23.306464] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:18:29.783 [2024-07-25 05:38:23.306557] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.783 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.783 [2024-07-25 05:38:23.370357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:29.783 [2024-07-25 05:38:23.462384] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.783 [2024-07-25 05:38:23.462454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.783 [2024-07-25 05:38:23.462481] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.783 [2024-07-25 05:38:23.462505] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.783 [2024-07-25 05:38:23.462525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:29.783 [2024-07-25 05:38:23.462612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.783 [2024-07-25 05:38:23.462672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.783 [2024-07-25 05:38:23.462695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.039 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.039 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:30.039 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.967 05:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:30.967 malloc0 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.967 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:30.968 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:30.968 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:31.224 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.224 00:18:31.224 00:18:31.224 CUnit - A unit testing framework for C - Version 2.1-3 00:18:31.224 http://cunit.sourceforge.net/ 00:18:31.224 00:18:31.224 00:18:31.224 Suite: nvme_compliance 00:18:31.224 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 05:38:24.818758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:31.224 [2024-07-25 05:38:24.820238] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:31.224 [2024-07-25 05:38:24.820270] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:31.224 [2024-07-25 05:38:24.820282] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:31.224 [2024-07-25 05:38:24.821774] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:31.224 passed 00:18:31.224 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 05:38:24.907376] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:31.224 [2024-07-25 05:38:24.910399] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:31.481 passed 00:18:31.481 Test: admin_identify_ns ...[2024-07-25 05:38:25.000816] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:31.481 [2024-07-25 05:38:25.060262] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:31.481 [2024-07-25 05:38:25.068286] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:31.481 [2024-07-25 
05:38:25.089371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:31.481 passed 00:18:31.481 Test: admin_get_features_mandatory_features ...[2024-07-25 05:38:25.174950] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:31.481 [2024-07-25 05:38:25.177967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:31.738 passed 00:18:31.738 Test: admin_get_features_optional_features ...[2024-07-25 05:38:25.264580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:31.738 [2024-07-25 05:38:25.267598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:31.738 passed 00:18:31.738 Test: admin_set_features_number_of_queues ...[2024-07-25 05:38:25.351819] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:31.995 [2024-07-25 05:38:25.455372] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:31.995 passed 00:18:31.995 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 05:38:25.542599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:31.995 [2024-07-25 05:38:25.545624] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:31.995 passed 00:18:31.995 Test: admin_get_log_page_with_lpo ...[2024-07-25 05:38:25.626843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:31.995 [2024-07-25 05:38:25.692264] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:32.252 [2024-07-25 05:38:25.708345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:32.252 passed 00:18:32.252 Test: fabric_property_get ...[2024-07-25 05:38:25.791702] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:32.252 [2024-07-25 05:38:25.792978] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:32.252 [2024-07-25 05:38:25.794727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:32.252 passed 00:18:32.252 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 05:38:25.883287] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:32.252 [2024-07-25 05:38:25.884607] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:32.252 [2024-07-25 05:38:25.886313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:32.252 passed 00:18:32.508 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 05:38:25.969894] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:32.508 [2024-07-25 05:38:26.053265] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:32.508 [2024-07-25 05:38:26.069283] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:32.508 [2024-07-25 05:38:26.074360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:32.508 passed 00:18:32.508 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 05:38:26.159595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:32.508 [2024-07-25 05:38:26.160916] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:32.508 [2024-07-25 05:38:26.162629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:32.508 passed 00:18:32.765 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 05:38:26.245061] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:32.765 [2024-07-25 05:38:26.321269] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:18:32.765 [2024-07-25 05:38:26.345253] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:32.765 [2024-07-25 05:38:26.350368] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:32.765 passed 00:18:32.765 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 05:38:26.433961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:32.765 [2024-07-25 05:38:26.435289] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:32.765 [2024-07-25 05:38:26.435344] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:32.765 [2024-07-25 05:38:26.436984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:33.021 passed 00:18:33.021 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 05:38:26.523328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:33.021 [2024-07-25 05:38:26.616253] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:33.021 [2024-07-25 05:38:26.624251] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:33.022 [2024-07-25 05:38:26.632252] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:33.022 [2024-07-25 05:38:26.640254] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:33.022 [2024-07-25 05:38:26.669364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:33.022 passed 00:18:33.278 Test: admin_create_io_sq_verify_pc ...[2024-07-25 05:38:26.754612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:33.278 [2024-07-25 05:38:26.771279] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:33.278 
[2024-07-25 05:38:26.789002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:33.278 passed 00:18:33.278 Test: admin_create_io_qp_max_qps ...[2024-07-25 05:38:26.870573] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:34.649 [2024-07-25 05:38:27.963259] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:34.649 [2024-07-25 05:38:28.343632] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:34.907 passed 00:18:34.907 Test: admin_create_io_sq_shared_cq ...[2024-07-25 05:38:28.426813] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:34.907 [2024-07-25 05:38:28.559265] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:34.907 [2024-07-25 05:38:28.596356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:35.165 passed 00:18:35.165 00:18:35.165 Run Summary: Type Total Ran Passed Failed Inactive 00:18:35.165 suites 1 1 n/a 0 0 00:18:35.165 tests 18 18 18 0 0 00:18:35.165 asserts 360 360 360 0 n/a 00:18:35.165 00:18:35.165 Elapsed time = 1.571 seconds 00:18:35.165 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1625941 00:18:35.165 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1625941 ']' 00:18:35.165 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1625941 00:18:35.165 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:35.165 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.165 05:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1625941 00:18:35.165 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:35.165 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:35.165 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1625941' 00:18:35.165 killing process with pid 1625941 00:18:35.165 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1625941 00:18:35.165 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1625941 00:18:35.423 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:35.423 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:35.423 00:18:35.423 real 0m5.727s 00:18:35.423 user 0m16.078s 00:18:35.423 sys 0m0.588s 00:18:35.423 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:35.423 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:35.423 ************************************ 00:18:35.423 END TEST nvmf_vfio_user_nvme_compliance 00:18:35.423 ************************************ 00:18:35.423 05:38:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:35.423 05:38:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:18:35.423 05:38:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:35.423 05:38:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.423 ************************************ 00:18:35.423 START TEST nvmf_vfio_user_fuzz 00:18:35.423 ************************************ 00:18:35.423 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:35.423 * Looking for test storage... 00:18:35.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.423 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.424 05:38:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:35.424 05:38:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1626661 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1626661' 00:18:35.424 Process pid: 1626661 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1626661 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1626661 ']' 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:35.424 05:38:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:35.424 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:35.682 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.682 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:35.682 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:37.056 malloc0 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.056 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:37.057 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:09.114 Fuzzing completed. Shutting down the fuzz application 00:19:09.114 00:19:09.114 Dumping successful admin opcodes: 00:19:09.114 8, 9, 10, 24, 00:19:09.114 Dumping successful io opcodes: 00:19:09.114 0, 00:19:09.114 NS: 0x200003a1ef00 I/O qp, Total commands completed: 575897, total successful commands: 2216, random_seed: 3821330048 00:19:09.114 NS: 0x200003a1ef00 admin qp, Total commands completed: 87194, total successful commands: 697, random_seed: 1791631104 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1626661 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1626661 ']' 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1626661 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1626661 00:19:09.114 05:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1626661' 00:19:09.114 killing process with pid 1626661 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1626661 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1626661 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:09.114 00:19:09.114 real 0m32.367s 00:19:09.114 user 0m30.851s 00:19:09.114 sys 0m28.636s 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.114 ************************************ 00:19:09.114 END TEST nvmf_vfio_user_fuzz 00:19:09.114 ************************************ 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:09.114 ************************************ 00:19:09.114 START TEST nvmf_auth_target 00:19:09.114 ************************************ 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:09.114 * Looking for test storage... 00:19:09.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.114 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.114 05:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:09.115 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:09.682 05:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:09.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:09.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:09.682 05:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:09.682 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:09.682 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:09.682 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:09.683 05:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:09.683 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:09.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:09.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:19:09.946 00:19:09.946 --- 10.0.0.2 ping statistics --- 00:19:09.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.946 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:09.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:09.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:19:09.946 00:19:09.946 --- 10.0.0.1 ping statistics --- 00:19:09.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.946 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1632207 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1632207 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1632207 ']' 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.946 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1632234 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=null 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6932c370720af63fc2654d80d516723032c3101ebc6fd308 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8xj 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6932c370720af63fc2654d80d516723032c3101ebc6fd308 0 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6932c370720af63fc2654d80d516723032c3101ebc6fd308 0 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6932c370720af63fc2654d80d516723032c3101ebc6fd308 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8xj 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8xj 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.8xj 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=631409d7cd7675dc79c62c72986cecf99ce600a75b9309deadacca0dec61008e 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Vfu 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 631409d7cd7675dc79c62c72986cecf99ce600a75b9309deadacca0dec61008e 3 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 631409d7cd7675dc79c62c72986cecf99ce600a75b9309deadacca0dec61008e 3 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=631409d7cd7675dc79c62c72986cecf99ce600a75b9309deadacca0dec61008e 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Vfu 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Vfu 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Vfu 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=249e305086a3d7d200b16a5b039ee2a9 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.PST 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 249e305086a3d7d200b16a5b039ee2a9 1 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
249e305086a3d7d200b16a5b039ee2a9 1 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=249e305086a3d7d200b16a5b039ee2a9 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:10.212 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.PST 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.PST 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.PST 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=857f9dcab97d7b8dc91e2f51ac3d512b3433c9cd72288252 00:19:10.472 05:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.pCq 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 857f9dcab97d7b8dc91e2f51ac3d512b3433c9cd72288252 2 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 857f9dcab97d7b8dc91e2f51ac3d512b3433c9cd72288252 2 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=857f9dcab97d7b8dc91e2f51ac3d512b3433c9cd72288252 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.pCq 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.pCq 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.pCq 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:10.472 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=42699e7ca0e952fc714ca338289bdc0f3bf78d0ef8561a06 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Tsi 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 42699e7ca0e952fc714ca338289bdc0f3bf78d0ef8561a06 2 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 42699e7ca0e952fc714ca338289bdc0f3bf78d0ef8561a06 2 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=42699e7ca0e952fc714ca338289bdc0f3bf78d0ef8561a06 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Tsi 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Tsi 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.Tsi 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=39f8347981fba449bad92e895c806f3e 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.t1D 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 39f8347981fba449bad92e895c806f3e 1 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 39f8347981fba449bad92e895c806f3e 1 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=39f8347981fba449bad92e895c806f3e 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.t1D 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.t1D 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.t1D 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2b43a0764d68ed9fe973dcd27b86ad0cf9aba1da584dec67295c6cb878d2b805 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VbB 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2b43a0764d68ed9fe973dcd27b86ad0cf9aba1da584dec67295c6cb878d2b805 3 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 2b43a0764d68ed9fe973dcd27b86ad0cf9aba1da584dec67295c6cb878d2b805 3 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2b43a0764d68ed9fe973dcd27b86ad0cf9aba1da584dec67295c6cb878d2b805 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VbB 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VbB 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.VbB 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1632207 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1632207 ']' 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.472 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.731 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.731 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:10.731 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1632234 /var/tmp/host.sock 00:19:10.731 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1632234 ']' 00:19:10.731 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:10.731 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.731 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:10.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:10.731 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.731 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.989 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.989 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:10.989 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:10.989 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.989 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.247 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.247 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:11.247 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8xj 00:19:11.247 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.247 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.247 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.247 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.8xj 00:19:11.247 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.8xj 00:19:11.504 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.Vfu ]] 00:19:11.504 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vfu 00:19:11.504 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.504 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.504 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.504 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vfu 00:19:11.505 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vfu 00:19:11.762 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:11.762 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.PST 00:19:11.762 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.762 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.762 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.762 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.PST 00:19:11.762 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.PST 00:19:12.020 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.pCq ]] 00:19:12.020 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pCq 00:19:12.020 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.020 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.020 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.020 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pCq 00:19:12.020 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pCq 00:19:12.277 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:12.277 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Tsi 00:19:12.277 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.277 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.277 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.277 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Tsi 00:19:12.277 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Tsi 00:19:12.533 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.t1D ]] 00:19:12.533 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t1D 00:19:12.533 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.533 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.533 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.533 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t1D 00:19:12.533 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t1D 00:19:12.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:12.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VbB 00:19:12.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.VbB 00:19:12.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.VbB 00:19:13.047 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:19:13.047 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:13.047 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.047 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.047 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.047 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.304 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.562 00:19:13.562 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.562 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.562 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.819 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.820 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.820 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.820 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.820 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.820 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:13.820 { 00:19:13.820 "cntlid": 1, 00:19:13.820 "qid": 0, 00:19:13.820 "state": "enabled", 00:19:13.820 "thread": "nvmf_tgt_poll_group_000", 00:19:13.820 "listen_address": { 00:19:13.820 "trtype": "TCP", 00:19:13.820 "adrfam": "IPv4", 00:19:13.820 "traddr": "10.0.0.2", 00:19:13.820 "trsvcid": "4420" 00:19:13.820 }, 00:19:13.820 "peer_address": { 00:19:13.820 "trtype": "TCP", 00:19:13.820 "adrfam": "IPv4", 00:19:13.820 "traddr": "10.0.0.1", 00:19:13.820 "trsvcid": "33554" 00:19:13.820 }, 00:19:13.820 "auth": { 00:19:13.820 "state": "completed", 00:19:13.820 "digest": "sha256", 00:19:13.820 "dhgroup": "null" 00:19:13.820 } 00:19:13.820 } 00:19:13.820 ]' 00:19:13.820 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.820 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.820 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.820 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:13.820 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.077 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.077 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.077 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.334 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:19:15.266 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.266 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.266 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.266 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.266 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.266 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.266 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.266 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.523 05:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.523 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.781 00:19:15.781 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.781 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.781 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.039 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.039 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.039 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.039 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.039 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.039 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.039 { 00:19:16.039 "cntlid": 3, 00:19:16.039 "qid": 0, 00:19:16.039 "state": "enabled", 00:19:16.039 "thread": "nvmf_tgt_poll_group_000", 00:19:16.039 "listen_address": { 00:19:16.039 "trtype": "TCP", 00:19:16.039 "adrfam": "IPv4", 00:19:16.040 "traddr": "10.0.0.2", 00:19:16.040 "trsvcid": "4420" 00:19:16.040 }, 00:19:16.040 "peer_address": { 00:19:16.040 "trtype": "TCP", 00:19:16.040 "adrfam": "IPv4", 00:19:16.040 "traddr": "10.0.0.1", 00:19:16.040 "trsvcid": "33580" 00:19:16.040 }, 00:19:16.040 "auth": { 00:19:16.040 "state": "completed", 00:19:16.040 "digest": "sha256", 00:19:16.040 "dhgroup": "null" 00:19:16.040 } 00:19:16.040 } 00:19:16.040 ]' 00:19:16.040 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.040 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.040 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.040 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:16.040 05:39:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.040 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.040 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.040 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.297 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:19:17.229 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.229 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.229 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.229 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.229 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.229 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.229 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:17.229 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.486 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.486 
05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.050 00:19:18.050 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.050 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.050 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.050 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.050 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.050 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.050 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.050 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.050 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.050 { 00:19:18.050 "cntlid": 5, 00:19:18.050 "qid": 0, 00:19:18.050 "state": "enabled", 00:19:18.050 "thread": "nvmf_tgt_poll_group_000", 00:19:18.050 "listen_address": { 00:19:18.050 "trtype": "TCP", 00:19:18.050 "adrfam": "IPv4", 00:19:18.050 "traddr": "10.0.0.2", 00:19:18.050 "trsvcid": "4420" 00:19:18.050 }, 00:19:18.050 "peer_address": { 00:19:18.050 "trtype": "TCP", 00:19:18.050 "adrfam": "IPv4", 00:19:18.050 "traddr": 
"10.0.0.1", 00:19:18.050 "trsvcid": "37308" 00:19:18.050 }, 00:19:18.050 "auth": { 00:19:18.050 "state": "completed", 00:19:18.050 "digest": "sha256", 00:19:18.050 "dhgroup": "null" 00:19:18.050 } 00:19:18.050 } 00:19:18.050 ]' 00:19:18.050 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.308 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.308 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.308 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:18.308 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.308 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.308 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.308 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.566 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:19:19.499 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.499 05:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.499 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.499 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.499 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.499 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.499 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:19.499 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.757 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.014 00:19:20.014 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.014 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.014 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.272 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.272 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.272 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.272 05:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.272 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.272 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.272 { 00:19:20.272 "cntlid": 7, 00:19:20.272 "qid": 0, 00:19:20.272 "state": "enabled", 00:19:20.272 "thread": "nvmf_tgt_poll_group_000", 00:19:20.272 "listen_address": { 00:19:20.272 "trtype": "TCP", 00:19:20.272 "adrfam": "IPv4", 00:19:20.272 "traddr": "10.0.0.2", 00:19:20.272 "trsvcid": "4420" 00:19:20.272 }, 00:19:20.272 "peer_address": { 00:19:20.272 "trtype": "TCP", 00:19:20.272 "adrfam": "IPv4", 00:19:20.272 "traddr": "10.0.0.1", 00:19:20.272 "trsvcid": "37344" 00:19:20.272 }, 00:19:20.272 "auth": { 00:19:20.272 "state": "completed", 00:19:20.272 "digest": "sha256", 00:19:20.272 "dhgroup": "null" 00:19:20.272 } 00:19:20.272 } 00:19:20.272 ]' 00:19:20.272 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.272 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.272 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.272 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:20.272 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.529 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.529 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.529 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.529 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.901 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.158 00:19:22.158 05:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.158 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.158 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.415 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.415 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.415 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.415 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.415 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.415 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.415 { 00:19:22.415 "cntlid": 9, 00:19:22.415 "qid": 0, 00:19:22.415 "state": "enabled", 00:19:22.415 "thread": "nvmf_tgt_poll_group_000", 00:19:22.415 "listen_address": { 00:19:22.415 "trtype": "TCP", 00:19:22.415 "adrfam": "IPv4", 00:19:22.415 "traddr": "10.0.0.2", 00:19:22.415 "trsvcid": "4420" 00:19:22.415 }, 00:19:22.415 "peer_address": { 00:19:22.415 "trtype": "TCP", 00:19:22.416 "adrfam": "IPv4", 00:19:22.416 "traddr": "10.0.0.1", 00:19:22.416 "trsvcid": "37366" 00:19:22.416 }, 00:19:22.416 "auth": { 00:19:22.416 "state": "completed", 00:19:22.416 "digest": "sha256", 00:19:22.416 "dhgroup": "ffdhe2048" 00:19:22.416 } 00:19:22.416 } 00:19:22.416 ]' 00:19:22.416 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.416 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.416 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.673 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.673 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.673 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.673 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.673 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.930 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:19:23.913 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.913 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.913 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.913 05:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.913 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.913 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.913 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:23.913 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.171 05:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.171 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.429 00:19:24.429 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.429 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.429 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.687 { 
00:19:24.687 "cntlid": 11, 00:19:24.687 "qid": 0, 00:19:24.687 "state": "enabled", 00:19:24.687 "thread": "nvmf_tgt_poll_group_000", 00:19:24.687 "listen_address": { 00:19:24.687 "trtype": "TCP", 00:19:24.687 "adrfam": "IPv4", 00:19:24.687 "traddr": "10.0.0.2", 00:19:24.687 "trsvcid": "4420" 00:19:24.687 }, 00:19:24.687 "peer_address": { 00:19:24.687 "trtype": "TCP", 00:19:24.687 "adrfam": "IPv4", 00:19:24.687 "traddr": "10.0.0.1", 00:19:24.687 "trsvcid": "37388" 00:19:24.687 }, 00:19:24.687 "auth": { 00:19:24.687 "state": "completed", 00:19:24.687 "digest": "sha256", 00:19:24.687 "dhgroup": "ffdhe2048" 00:19:24.687 } 00:19:24.687 } 00:19:24.687 ]' 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.687 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.945 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:19:25.878 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.878 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.878 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.878 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.878 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.878 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.878 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:25.879 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.137 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.703 00:19:26.703 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.703 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.703 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.703 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.703 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.703 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.703 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.961 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.961 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.961 { 00:19:26.961 "cntlid": 13, 00:19:26.961 "qid": 0, 00:19:26.961 "state": "enabled", 00:19:26.961 "thread": "nvmf_tgt_poll_group_000", 00:19:26.961 "listen_address": { 00:19:26.961 "trtype": "TCP", 00:19:26.961 "adrfam": "IPv4", 00:19:26.961 "traddr": "10.0.0.2", 00:19:26.961 "trsvcid": "4420" 00:19:26.961 }, 00:19:26.961 "peer_address": { 00:19:26.961 "trtype": "TCP", 00:19:26.961 "adrfam": "IPv4", 00:19:26.961 "traddr": "10.0.0.1", 00:19:26.961 "trsvcid": "52032" 00:19:26.961 }, 00:19:26.961 "auth": { 00:19:26.961 "state": "completed", 00:19:26.961 "digest": "sha256", 00:19:26.961 "dhgroup": "ffdhe2048" 00:19:26.961 } 00:19:26.961 } 00:19:26.961 ]' 00:19:26.961 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.961 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.961 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.961 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.961 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.961 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.961 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.961 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.218 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:19:28.148 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.148 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.148 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.148 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.148 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.148 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.148 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.148 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.406 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.664 00:19:28.664 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.664 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.664 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.922 { 00:19:28.922 "cntlid": 15, 00:19:28.922 "qid": 0, 00:19:28.922 "state": "enabled", 00:19:28.922 "thread": "nvmf_tgt_poll_group_000", 00:19:28.922 "listen_address": { 00:19:28.922 "trtype": "TCP", 00:19:28.922 "adrfam": "IPv4", 00:19:28.922 "traddr": "10.0.0.2", 00:19:28.922 "trsvcid": "4420" 00:19:28.922 }, 00:19:28.922 "peer_address": { 00:19:28.922 "trtype": "TCP", 00:19:28.922 "adrfam": "IPv4", 00:19:28.922 "traddr": "10.0.0.1", 00:19:28.922 "trsvcid": "52062" 00:19:28.922 }, 00:19:28.922 "auth": { 
00:19:28.922 "state": "completed", 00:19:28.922 "digest": "sha256", 00:19:28.922 "dhgroup": "ffdhe2048" 00:19:28.922 } 00:19:28.922 } 00:19:28.922 ]' 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.922 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.180 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.180 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.180 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.438 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:19:30.369 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.369 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.369 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.369 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.369 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.369 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.369 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.369 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.369 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.627 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.884 00:19:30.884 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.884 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.884 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.142 { 00:19:31.142 "cntlid": 17, 00:19:31.142 "qid": 0, 00:19:31.142 "state": "enabled", 00:19:31.142 "thread": "nvmf_tgt_poll_group_000", 00:19:31.142 "listen_address": { 00:19:31.142 "trtype": "TCP", 00:19:31.142 "adrfam": "IPv4", 00:19:31.142 "traddr": "10.0.0.2", 00:19:31.142 "trsvcid": "4420" 00:19:31.142 }, 00:19:31.142 "peer_address": { 00:19:31.142 "trtype": "TCP", 00:19:31.142 "adrfam": "IPv4", 00:19:31.142 "traddr": "10.0.0.1", 00:19:31.142 "trsvcid": "52074" 00:19:31.142 }, 00:19:31.142 "auth": { 00:19:31.142 "state": "completed", 00:19:31.142 "digest": "sha256", 00:19:31.142 "dhgroup": "ffdhe3072" 00:19:31.142 } 00:19:31.142 } 00:19:31.142 ]' 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.142 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.400 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:19:32.773 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.773 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.773 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.773 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.773 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.773 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.773 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.773 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.773 05:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:33.031 00:19:33.031 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.031 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.031 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.289 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.289 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.289 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.289 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.289 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.289 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.289 { 00:19:33.289 "cntlid": 19, 00:19:33.289 "qid": 0, 00:19:33.289 "state": "enabled", 00:19:33.289 "thread": "nvmf_tgt_poll_group_000", 00:19:33.289 "listen_address": { 00:19:33.289 "trtype": "TCP", 00:19:33.289 "adrfam": "IPv4", 00:19:33.289 "traddr": "10.0.0.2", 00:19:33.289 "trsvcid": "4420" 00:19:33.289 }, 00:19:33.289 "peer_address": { 00:19:33.289 "trtype": "TCP", 00:19:33.289 "adrfam": "IPv4", 00:19:33.289 "traddr": "10.0.0.1", 00:19:33.289 "trsvcid": "52102" 00:19:33.289 }, 00:19:33.289 "auth": { 00:19:33.289 "state": "completed", 00:19:33.289 "digest": "sha256", 00:19:33.289 "dhgroup": "ffdhe3072" 00:19:33.289 } 00:19:33.289 } 00:19:33.289 ]' 00:19:33.289 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.289 
05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.289 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.547 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.547 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.547 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.547 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.547 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.805 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:19:34.737 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.737 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.737 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.737 05:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.737 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.737 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.737 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.737 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.994 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:34.994 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.994 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.994 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:34.994 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:34.994 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.994 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.995 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.995 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.995 05:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.995 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.995 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.252 00:19:35.252 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.252 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.252 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.510 { 
00:19:35.510 "cntlid": 21, 00:19:35.510 "qid": 0, 00:19:35.510 "state": "enabled", 00:19:35.510 "thread": "nvmf_tgt_poll_group_000", 00:19:35.510 "listen_address": { 00:19:35.510 "trtype": "TCP", 00:19:35.510 "adrfam": "IPv4", 00:19:35.510 "traddr": "10.0.0.2", 00:19:35.510 "trsvcid": "4420" 00:19:35.510 }, 00:19:35.510 "peer_address": { 00:19:35.510 "trtype": "TCP", 00:19:35.510 "adrfam": "IPv4", 00:19:35.510 "traddr": "10.0.0.1", 00:19:35.510 "trsvcid": "52114" 00:19:35.510 }, 00:19:35.510 "auth": { 00:19:35.510 "state": "completed", 00:19:35.510 "digest": "sha256", 00:19:35.510 "dhgroup": "ffdhe3072" 00:19:35.510 } 00:19:35.510 } 00:19:35.510 ]' 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.510 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.076 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.008 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.582 00:19:37.582 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.582 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.582 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.582 05:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.582 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.582 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.582 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.582 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.582 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.582 { 00:19:37.582 "cntlid": 23, 00:19:37.582 "qid": 0, 00:19:37.582 "state": "enabled", 00:19:37.582 "thread": "nvmf_tgt_poll_group_000", 00:19:37.582 "listen_address": { 00:19:37.582 "trtype": "TCP", 00:19:37.582 "adrfam": "IPv4", 00:19:37.582 "traddr": "10.0.0.2", 00:19:37.582 "trsvcid": "4420" 00:19:37.582 }, 00:19:37.582 "peer_address": { 00:19:37.582 "trtype": "TCP", 00:19:37.582 "adrfam": "IPv4", 00:19:37.582 "traddr": "10.0.0.1", 00:19:37.582 "trsvcid": "47244" 00:19:37.582 }, 00:19:37.582 "auth": { 00:19:37.582 "state": "completed", 00:19:37.582 "digest": "sha256", 00:19:37.582 "dhgroup": "ffdhe3072" 00:19:37.582 } 00:19:37.582 } 00:19:37.582 ]' 00:19:37.582 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.850 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.850 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.850 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:37.850 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.850 05:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.850 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.850 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.107 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:19:39.039 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.040 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.040 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.040 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.040 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.040 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.040 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.040 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.040 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.297 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.297 05:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.861 00:19:39.861 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.861 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.861 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.861 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.861 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.861 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.861 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.861 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.861 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.861 { 00:19:39.861 "cntlid": 25, 00:19:39.861 "qid": 0, 00:19:39.861 "state": "enabled", 00:19:39.861 "thread": "nvmf_tgt_poll_group_000", 00:19:39.861 "listen_address": { 00:19:39.861 "trtype": "TCP", 00:19:39.861 "adrfam": "IPv4", 00:19:39.861 "traddr": "10.0.0.2", 00:19:39.861 "trsvcid": "4420" 00:19:39.861 }, 00:19:39.861 "peer_address": { 00:19:39.861 "trtype": "TCP", 00:19:39.861 "adrfam": "IPv4", 00:19:39.861 "traddr": "10.0.0.1", 
00:19:39.861 "trsvcid": "47260" 00:19:39.861 }, 00:19:39.861 "auth": { 00:19:39.861 "state": "completed", 00:19:39.861 "digest": "sha256", 00:19:39.861 "dhgroup": "ffdhe4096" 00:19:39.861 } 00:19:39.861 } 00:19:39.861 ]' 00:19:39.861 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.118 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.118 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.118 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.118 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.118 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.118 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.118 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.376 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:19:41.308 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:41.308 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.308 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.308 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.308 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.308 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.308 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.308 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.565 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.566 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.823 00:19:41.823 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.823 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.823 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.080 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.080 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.080 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:42.080 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.080 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.080 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.080 { 00:19:42.080 "cntlid": 27, 00:19:42.080 "qid": 0, 00:19:42.080 "state": "enabled", 00:19:42.080 "thread": "nvmf_tgt_poll_group_000", 00:19:42.080 "listen_address": { 00:19:42.080 "trtype": "TCP", 00:19:42.080 "adrfam": "IPv4", 00:19:42.080 "traddr": "10.0.0.2", 00:19:42.080 "trsvcid": "4420" 00:19:42.080 }, 00:19:42.081 "peer_address": { 00:19:42.081 "trtype": "TCP", 00:19:42.081 "adrfam": "IPv4", 00:19:42.081 "traddr": "10.0.0.1", 00:19:42.081 "trsvcid": "47294" 00:19:42.081 }, 00:19:42.081 "auth": { 00:19:42.081 "state": "completed", 00:19:42.081 "digest": "sha256", 00:19:42.081 "dhgroup": "ffdhe4096" 00:19:42.081 } 00:19:42.081 } 00:19:42.081 ]' 00:19:42.081 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.338 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.338 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.338 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.338 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.338 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.338 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.338 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.596 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:19:43.530 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.530 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.530 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.530 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.530 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.530 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.530 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.530 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.788 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:19:43.788 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.788 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.788 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:43.788 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.788 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.788 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.788 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.788 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.788 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.789 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.789 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.047 00:19:44.047 05:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.047 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.047 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.305 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.305 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.305 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.305 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.305 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.305 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.305 { 00:19:44.305 "cntlid": 29, 00:19:44.305 "qid": 0, 00:19:44.305 "state": "enabled", 00:19:44.305 "thread": "nvmf_tgt_poll_group_000", 00:19:44.305 "listen_address": { 00:19:44.305 "trtype": "TCP", 00:19:44.305 "adrfam": "IPv4", 00:19:44.305 "traddr": "10.0.0.2", 00:19:44.305 "trsvcid": "4420" 00:19:44.305 }, 00:19:44.305 "peer_address": { 00:19:44.305 "trtype": "TCP", 00:19:44.305 "adrfam": "IPv4", 00:19:44.305 "traddr": "10.0.0.1", 00:19:44.305 "trsvcid": "47318" 00:19:44.305 }, 00:19:44.305 "auth": { 00:19:44.305 "state": "completed", 00:19:44.305 "digest": "sha256", 00:19:44.305 "dhgroup": "ffdhe4096" 00:19:44.305 } 00:19:44.305 } 00:19:44.305 ]' 00:19:44.305 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.563 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.563 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.563 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.563 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.563 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.563 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.563 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.821 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:19:45.755 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.755 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.755 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.755 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:45.755 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.755 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.755 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.755 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.013 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.579 00:19:46.579 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.579 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.580 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.580 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.580 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.580 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.580 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.580 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.580 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.580 { 00:19:46.580 "cntlid": 31, 00:19:46.580 "qid": 0, 00:19:46.580 "state": "enabled", 00:19:46.580 "thread": "nvmf_tgt_poll_group_000", 
00:19:46.580 "listen_address": { 00:19:46.580 "trtype": "TCP", 00:19:46.580 "adrfam": "IPv4", 00:19:46.580 "traddr": "10.0.0.2", 00:19:46.580 "trsvcid": "4420" 00:19:46.580 }, 00:19:46.580 "peer_address": { 00:19:46.580 "trtype": "TCP", 00:19:46.580 "adrfam": "IPv4", 00:19:46.580 "traddr": "10.0.0.1", 00:19:46.580 "trsvcid": "44196" 00:19:46.580 }, 00:19:46.580 "auth": { 00:19:46.580 "state": "completed", 00:19:46.580 "digest": "sha256", 00:19:46.580 "dhgroup": "ffdhe4096" 00:19:46.580 } 00:19:46.580 } 00:19:46.580 ]' 00:19:46.580 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.838 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.838 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.838 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:46.838 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.838 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.838 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.838 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.095 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 
00:19:48.029 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.029 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.029 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.029 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.029 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.029 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.029 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.029 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.029 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.287 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.852 00:19:48.852 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.852 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.852 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.109 05:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.109 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.109 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.109 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.109 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.109 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.110 { 00:19:49.110 "cntlid": 33, 00:19:49.110 "qid": 0, 00:19:49.110 "state": "enabled", 00:19:49.110 "thread": "nvmf_tgt_poll_group_000", 00:19:49.110 "listen_address": { 00:19:49.110 "trtype": "TCP", 00:19:49.110 "adrfam": "IPv4", 00:19:49.110 "traddr": "10.0.0.2", 00:19:49.110 "trsvcid": "4420" 00:19:49.110 }, 00:19:49.110 "peer_address": { 00:19:49.110 "trtype": "TCP", 00:19:49.110 "adrfam": "IPv4", 00:19:49.110 "traddr": "10.0.0.1", 00:19:49.110 "trsvcid": "44206" 00:19:49.110 }, 00:19:49.110 "auth": { 00:19:49.110 "state": "completed", 00:19:49.110 "digest": "sha256", 00:19:49.110 "dhgroup": "ffdhe6144" 00:19:49.110 } 00:19:49.110 } 00:19:49.110 ]' 00:19:49.110 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.110 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.110 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.110 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.110 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.366 05:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.366 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.366 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.623 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:19:50.554 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.554 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.554 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.554 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.554 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.554 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.554 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:19:50.554 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.813 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.408 00:19:51.408 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.408 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.408 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.667 { 00:19:51.667 "cntlid": 35, 00:19:51.667 "qid": 0, 00:19:51.667 "state": "enabled", 00:19:51.667 "thread": "nvmf_tgt_poll_group_000", 00:19:51.667 "listen_address": { 00:19:51.667 "trtype": "TCP", 00:19:51.667 "adrfam": "IPv4", 00:19:51.667 "traddr": "10.0.0.2", 00:19:51.667 "trsvcid": "4420" 00:19:51.667 }, 00:19:51.667 "peer_address": { 00:19:51.667 "trtype": "TCP", 00:19:51.667 "adrfam": "IPv4", 00:19:51.667 "traddr": "10.0.0.1", 00:19:51.667 "trsvcid": "44238" 00:19:51.667 
}, 00:19:51.667 "auth": { 00:19:51.667 "state": "completed", 00:19:51.667 "digest": "sha256", 00:19:51.667 "dhgroup": "ffdhe6144" 00:19:51.667 } 00:19:51.667 } 00:19:51.667 ]' 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.667 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.925 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:19:52.859 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.374 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.941 00:19:53.941 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.941 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.941 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.941 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.941 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.941 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.941 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:53.941 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.941 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.941 { 00:19:53.941 "cntlid": 37, 00:19:53.941 "qid": 0, 00:19:53.941 "state": "enabled", 00:19:53.941 "thread": "nvmf_tgt_poll_group_000", 00:19:53.941 "listen_address": { 00:19:53.941 "trtype": "TCP", 00:19:53.941 "adrfam": "IPv4", 00:19:53.941 "traddr": "10.0.0.2", 00:19:53.941 "trsvcid": "4420" 00:19:53.941 }, 00:19:53.941 "peer_address": { 00:19:53.941 "trtype": "TCP", 00:19:53.941 "adrfam": "IPv4", 00:19:53.941 "traddr": "10.0.0.1", 00:19:53.941 "trsvcid": "44266" 00:19:53.941 }, 00:19:53.941 "auth": { 00:19:53.941 "state": "completed", 00:19:53.941 "digest": "sha256", 00:19:53.941 "dhgroup": "ffdhe6144" 00:19:53.941 } 00:19:53.941 } 00:19:53.941 ]' 00:19:54.199 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.199 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.199 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.199 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.199 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.199 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.199 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.199 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:54.457 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:19:55.393 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.393 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.393 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.393 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.393 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.393 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.393 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:55.393 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:55.651 05:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.651 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.217 00:19:56.475 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.475 05:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.475 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.733 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.733 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.733 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.733 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.733 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.733 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.733 { 00:19:56.733 "cntlid": 39, 00:19:56.733 "qid": 0, 00:19:56.733 "state": "enabled", 00:19:56.733 "thread": "nvmf_tgt_poll_group_000", 00:19:56.733 "listen_address": { 00:19:56.733 "trtype": "TCP", 00:19:56.734 "adrfam": "IPv4", 00:19:56.734 "traddr": "10.0.0.2", 00:19:56.734 "trsvcid": "4420" 00:19:56.734 }, 00:19:56.734 "peer_address": { 00:19:56.734 "trtype": "TCP", 00:19:56.734 "adrfam": "IPv4", 00:19:56.734 "traddr": "10.0.0.1", 00:19:56.734 "trsvcid": "58756" 00:19:56.734 }, 00:19:56.734 "auth": { 00:19:56.734 "state": "completed", 00:19:56.734 "digest": "sha256", 00:19:56.734 "dhgroup": "ffdhe6144" 00:19:56.734 } 00:19:56.734 } 00:19:56.734 ]' 00:19:56.734 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.734 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.734 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.734 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:56.734 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.734 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.734 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.734 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.992 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:19:57.925 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.925 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.925 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.925 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.925 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.925 05:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.925 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.925 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.925 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.183 05:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.183 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.114 00:19:59.114 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.114 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.114 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.371 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.371 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.371 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.372 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.372 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.372 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.372 { 00:19:59.372 "cntlid": 41, 00:19:59.372 "qid": 0, 00:19:59.372 "state": "enabled", 00:19:59.372 "thread": 
"nvmf_tgt_poll_group_000", 00:19:59.372 "listen_address": { 00:19:59.372 "trtype": "TCP", 00:19:59.372 "adrfam": "IPv4", 00:19:59.372 "traddr": "10.0.0.2", 00:19:59.372 "trsvcid": "4420" 00:19:59.372 }, 00:19:59.372 "peer_address": { 00:19:59.372 "trtype": "TCP", 00:19:59.372 "adrfam": "IPv4", 00:19:59.372 "traddr": "10.0.0.1", 00:19:59.372 "trsvcid": "58780" 00:19:59.372 }, 00:19:59.372 "auth": { 00:19:59.372 "state": "completed", 00:19:59.372 "digest": "sha256", 00:19:59.372 "dhgroup": "ffdhe8192" 00:19:59.372 } 00:19:59.372 } 00:19:59.372 ]' 00:19:59.372 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.372 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.372 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.372 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.372 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.372 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.372 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.372 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.936 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:20:00.867 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.867 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.867 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.867 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.867 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.867 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.867 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.867 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.125 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.058 00:20:02.058 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.058 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.058 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.058 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.058 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.058 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.058 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.058 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.058 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.058 { 00:20:02.058 "cntlid": 43, 00:20:02.058 "qid": 0, 00:20:02.058 "state": "enabled", 00:20:02.058 "thread": "nvmf_tgt_poll_group_000", 00:20:02.058 "listen_address": { 00:20:02.058 "trtype": "TCP", 00:20:02.058 "adrfam": "IPv4", 00:20:02.058 "traddr": "10.0.0.2", 00:20:02.058 "trsvcid": "4420" 00:20:02.058 }, 00:20:02.058 "peer_address": { 00:20:02.058 "trtype": "TCP", 00:20:02.058 "adrfam": "IPv4", 00:20:02.058 "traddr": "10.0.0.1", 00:20:02.058 "trsvcid": "58804" 00:20:02.058 }, 00:20:02.058 "auth": { 00:20:02.058 "state": "completed", 00:20:02.058 "digest": "sha256", 00:20:02.058 "dhgroup": "ffdhe8192" 00:20:02.058 } 00:20:02.058 } 00:20:02.058 ]' 00:20:02.058 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.315 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.315 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.315 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.315 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.315 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.315 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.315 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.573 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:20:03.506 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.506 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.506 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.506 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.506 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.506 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.506 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.506 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.764 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.765 05:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.698 00:20:04.698 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.698 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.698 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.956 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.956 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.956 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.956 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.956 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.956 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.956 { 00:20:04.956 "cntlid": 45, 00:20:04.956 "qid": 0, 00:20:04.956 "state": "enabled", 00:20:04.956 "thread": "nvmf_tgt_poll_group_000", 00:20:04.956 "listen_address": { 00:20:04.956 "trtype": "TCP", 00:20:04.956 "adrfam": "IPv4", 00:20:04.956 "traddr": "10.0.0.2", 00:20:04.956 "trsvcid": "4420" 00:20:04.956 }, 00:20:04.956 "peer_address": { 00:20:04.956 "trtype": "TCP", 00:20:04.956 "adrfam": "IPv4", 00:20:04.956 "traddr": "10.0.0.1", 
00:20:04.956 "trsvcid": "58828" 00:20:04.956 }, 00:20:04.956 "auth": { 00:20:04.956 "state": "completed", 00:20:04.956 "digest": "sha256", 00:20:04.956 "dhgroup": "ffdhe8192" 00:20:04.956 } 00:20:04.956 } 00:20:04.956 ]' 00:20:04.956 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.214 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.214 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.214 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.214 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.214 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.214 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.214 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.504 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:20:06.438 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.438 05:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.438 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.438 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.438 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.438 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.438 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.438 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.696 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.629 00:20:07.629 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.629 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.629 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.886 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.886 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.886 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.886 05:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.886 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.886 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.886 { 00:20:07.886 "cntlid": 47, 00:20:07.886 "qid": 0, 00:20:07.886 "state": "enabled", 00:20:07.886 "thread": "nvmf_tgt_poll_group_000", 00:20:07.886 "listen_address": { 00:20:07.886 "trtype": "TCP", 00:20:07.886 "adrfam": "IPv4", 00:20:07.886 "traddr": "10.0.0.2", 00:20:07.886 "trsvcid": "4420" 00:20:07.886 }, 00:20:07.886 "peer_address": { 00:20:07.886 "trtype": "TCP", 00:20:07.886 "adrfam": "IPv4", 00:20:07.886 "traddr": "10.0.0.1", 00:20:07.886 "trsvcid": "55804" 00:20:07.886 }, 00:20:07.886 "auth": { 00:20:07.886 "state": "completed", 00:20:07.886 "digest": "sha256", 00:20:07.886 "dhgroup": "ffdhe8192" 00:20:07.886 } 00:20:07.886 } 00:20:07.886 ]' 00:20:07.886 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.886 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.886 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.886 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.886 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.144 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.144 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.144 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.401 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:20:09.333 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.333 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.333 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.333 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.333 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.333 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:09.333 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.333 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.333 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.333 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.592 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.850 00:20:09.850 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.850 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.850 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.106 { 00:20:10.106 "cntlid": 49, 00:20:10.106 "qid": 0, 00:20:10.106 "state": "enabled", 00:20:10.106 "thread": "nvmf_tgt_poll_group_000", 00:20:10.106 "listen_address": { 00:20:10.106 "trtype": "TCP", 00:20:10.106 "adrfam": "IPv4", 00:20:10.106 "traddr": "10.0.0.2", 00:20:10.106 "trsvcid": "4420" 00:20:10.106 }, 00:20:10.106 "peer_address": { 00:20:10.106 "trtype": "TCP", 00:20:10.106 "adrfam": "IPv4", 00:20:10.106 "traddr": "10.0.0.1", 00:20:10.106 "trsvcid": "55832" 00:20:10.106 }, 00:20:10.106 "auth": { 00:20:10.106 "state": "completed", 00:20:10.106 "digest": "sha384", 00:20:10.106 "dhgroup": "null" 00:20:10.106 } 00:20:10.106 } 00:20:10.106 ]' 00:20:10.106 
05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.106 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.362 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:20:11.295 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.553 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.553 
05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.553 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.553 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.553 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.553 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.553 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.811 05:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.811 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.069 00:20:12.069 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.069 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.069 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.327 { 00:20:12.327 "cntlid": 51, 00:20:12.327 "qid": 0, 00:20:12.327 "state": "enabled", 00:20:12.327 "thread": "nvmf_tgt_poll_group_000", 00:20:12.327 "listen_address": { 00:20:12.327 "trtype": "TCP", 00:20:12.327 "adrfam": "IPv4", 00:20:12.327 "traddr": "10.0.0.2", 00:20:12.327 "trsvcid": "4420" 00:20:12.327 }, 00:20:12.327 "peer_address": { 00:20:12.327 "trtype": "TCP", 00:20:12.327 "adrfam": "IPv4", 00:20:12.327 "traddr": "10.0.0.1", 00:20:12.327 "trsvcid": "55840" 00:20:12.327 }, 00:20:12.327 "auth": { 00:20:12.327 "state": "completed", 00:20:12.327 "digest": "sha384", 00:20:12.327 "dhgroup": "null" 00:20:12.327 } 00:20:12.327 } 00:20:12.327 ]' 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.327 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.585 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:20:13.520 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.520 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.520 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.520 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.520 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.520 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.520 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:13.520 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:13.778 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:13.778 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.778 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.778 05:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:13.778 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:13.778 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.778 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.778 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.778 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.778 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.779 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.779 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.037 00:20:14.037 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.037 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.037 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.295 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.295 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.295 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.295 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.295 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.295 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.295 { 00:20:14.295 "cntlid": 53, 00:20:14.295 "qid": 0, 00:20:14.295 "state": "enabled", 00:20:14.295 "thread": "nvmf_tgt_poll_group_000", 00:20:14.295 "listen_address": { 00:20:14.295 "trtype": "TCP", 00:20:14.295 "adrfam": "IPv4", 00:20:14.295 "traddr": "10.0.0.2", 00:20:14.295 "trsvcid": "4420" 00:20:14.295 }, 00:20:14.295 "peer_address": { 00:20:14.295 "trtype": "TCP", 00:20:14.295 "adrfam": "IPv4", 00:20:14.295 "traddr": "10.0.0.1", 00:20:14.295 "trsvcid": "55866" 00:20:14.295 }, 00:20:14.295 "auth": { 00:20:14.295 "state": "completed", 00:20:14.295 "digest": "sha384", 00:20:14.295 "dhgroup": "null" 00:20:14.295 } 00:20:14.295 } 00:20:14.295 ]' 00:20:14.295 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.553 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.553 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.553 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:14.553 05:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.553 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.553 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.553 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.812 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:20:15.746 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.746 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.746 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.746 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.746 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.746 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.746 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:15.746 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.005 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.263 00:20:16.263 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.263 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.263 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.522 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.522 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.522 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.522 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.522 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.522 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.522 { 00:20:16.522 "cntlid": 55, 00:20:16.522 "qid": 0, 00:20:16.522 "state": "enabled", 00:20:16.522 "thread": "nvmf_tgt_poll_group_000", 00:20:16.522 "listen_address": { 00:20:16.522 "trtype": "TCP", 00:20:16.522 "adrfam": "IPv4", 00:20:16.522 "traddr": "10.0.0.2", 00:20:16.522 "trsvcid": "4420" 00:20:16.522 }, 00:20:16.522 "peer_address": { 00:20:16.522 "trtype": "TCP", 00:20:16.522 "adrfam": "IPv4", 00:20:16.522 "traddr": "10.0.0.1", 00:20:16.522 "trsvcid": "57580" 00:20:16.522 }, 00:20:16.522 "auth": { 
00:20:16.522 "state": "completed", 00:20:16.522 "digest": "sha384", 00:20:16.522 "dhgroup": "null" 00:20:16.522 } 00:20:16.522 } 00:20:16.522 ]' 00:20:16.522 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.522 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.522 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.780 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:16.780 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.780 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.780 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.780 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.037 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:20:17.972 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.972 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.972 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.972 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.972 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.972 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.972 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.972 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.972 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.231 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.489 00:20:18.489 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.489 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.489 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.747 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.747 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.747 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:18.747 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.747 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.747 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.747 { 00:20:18.747 "cntlid": 57, 00:20:18.747 "qid": 0, 00:20:18.747 "state": "enabled", 00:20:18.747 "thread": "nvmf_tgt_poll_group_000", 00:20:18.747 "listen_address": { 00:20:18.747 "trtype": "TCP", 00:20:18.747 "adrfam": "IPv4", 00:20:18.747 "traddr": "10.0.0.2", 00:20:18.747 "trsvcid": "4420" 00:20:18.747 }, 00:20:18.747 "peer_address": { 00:20:18.747 "trtype": "TCP", 00:20:18.747 "adrfam": "IPv4", 00:20:18.747 "traddr": "10.0.0.1", 00:20:18.747 "trsvcid": "57614" 00:20:18.747 }, 00:20:18.747 "auth": { 00:20:18.747 "state": "completed", 00:20:18.747 "digest": "sha384", 00:20:18.747 "dhgroup": "ffdhe2048" 00:20:18.747 } 00:20:18.747 } 00:20:18.747 ]' 00:20:18.747 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.747 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.747 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.005 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.005 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.005 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.005 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.005 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.296 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:20:20.228 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.228 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.228 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.228 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.228 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.228 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.228 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.228 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.485 05:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.485 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:20.742 00:20:20.742 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.742 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.742 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.000 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.000 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.000 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.000 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.000 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.000 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.000 { 00:20:21.000 "cntlid": 59, 00:20:21.000 "qid": 0, 00:20:21.000 "state": "enabled", 00:20:21.000 "thread": "nvmf_tgt_poll_group_000", 00:20:21.000 "listen_address": { 00:20:21.000 "trtype": "TCP", 00:20:21.000 "adrfam": "IPv4", 00:20:21.000 "traddr": "10.0.0.2", 00:20:21.000 "trsvcid": "4420" 00:20:21.000 }, 00:20:21.000 "peer_address": { 00:20:21.000 "trtype": "TCP", 00:20:21.000 "adrfam": "IPv4", 00:20:21.000 "traddr": "10.0.0.1", 00:20:21.000 "trsvcid": "57652" 00:20:21.000 }, 00:20:21.000 "auth": { 00:20:21.000 "state": "completed", 00:20:21.000 "digest": "sha384", 00:20:21.000 "dhgroup": "ffdhe2048" 00:20:21.000 } 00:20:21.000 } 00:20:21.000 ]' 00:20:21.000 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.000 
05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.000 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.257 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.257 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.257 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.257 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.257 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.515 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:20:22.447 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.447 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.447 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.447 05:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.447 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.448 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.448 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.448 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.705 05:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.705 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.270 00:20:23.270 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.270 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.270 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.270 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.270 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.270 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.270 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.270 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.270 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.270 { 
00:20:23.270 "cntlid": 61, 00:20:23.270 "qid": 0, 00:20:23.270 "state": "enabled", 00:20:23.270 "thread": "nvmf_tgt_poll_group_000", 00:20:23.270 "listen_address": { 00:20:23.270 "trtype": "TCP", 00:20:23.270 "adrfam": "IPv4", 00:20:23.270 "traddr": "10.0.0.2", 00:20:23.270 "trsvcid": "4420" 00:20:23.270 }, 00:20:23.270 "peer_address": { 00:20:23.270 "trtype": "TCP", 00:20:23.270 "adrfam": "IPv4", 00:20:23.270 "traddr": "10.0.0.1", 00:20:23.270 "trsvcid": "57684" 00:20:23.270 }, 00:20:23.270 "auth": { 00:20:23.270 "state": "completed", 00:20:23.270 "digest": "sha384", 00:20:23.270 "dhgroup": "ffdhe2048" 00:20:23.270 } 00:20:23.270 } 00:20:23.270 ]' 00:20:23.270 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.528 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.528 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.528 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.528 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.528 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.528 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.528 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.786 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:20:24.728 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.728 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.728 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.728 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.728 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.728 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.728 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.728 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.985 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.549 00:20:25.549 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.549 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.549 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.549 05:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.549 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.549 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.549 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.549 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.550 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.550 { 00:20:25.550 "cntlid": 63, 00:20:25.550 "qid": 0, 00:20:25.550 "state": "enabled", 00:20:25.550 "thread": "nvmf_tgt_poll_group_000", 00:20:25.550 "listen_address": { 00:20:25.550 "trtype": "TCP", 00:20:25.550 "adrfam": "IPv4", 00:20:25.550 "traddr": "10.0.0.2", 00:20:25.550 "trsvcid": "4420" 00:20:25.550 }, 00:20:25.550 "peer_address": { 00:20:25.550 "trtype": "TCP", 00:20:25.550 "adrfam": "IPv4", 00:20:25.550 "traddr": "10.0.0.1", 00:20:25.550 "trsvcid": "57722" 00:20:25.550 }, 00:20:25.550 "auth": { 00:20:25.550 "state": "completed", 00:20:25.550 "digest": "sha384", 00:20:25.550 "dhgroup": "ffdhe2048" 00:20:25.550 } 00:20:25.550 } 00:20:25.550 ]' 00:20:25.550 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.807 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.807 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.807 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.807 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.807 05:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.807 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.807 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.064 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:20:26.997 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.997 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.997 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.997 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.997 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.997 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.997 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.997 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.997 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.255 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.255 05:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.513 00:20:27.771 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.771 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.771 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.029 { 00:20:28.029 "cntlid": 65, 00:20:28.029 "qid": 0, 00:20:28.029 "state": "enabled", 00:20:28.029 "thread": "nvmf_tgt_poll_group_000", 00:20:28.029 "listen_address": { 00:20:28.029 "trtype": "TCP", 00:20:28.029 "adrfam": "IPv4", 00:20:28.029 "traddr": "10.0.0.2", 00:20:28.029 "trsvcid": "4420" 00:20:28.029 }, 00:20:28.029 "peer_address": { 00:20:28.029 "trtype": "TCP", 00:20:28.029 "adrfam": "IPv4", 00:20:28.029 "traddr": "10.0.0.1", 
00:20:28.029 "trsvcid": "51916" 00:20:28.029 }, 00:20:28.029 "auth": { 00:20:28.029 "state": "completed", 00:20:28.029 "digest": "sha384", 00:20:28.029 "dhgroup": "ffdhe3072" 00:20:28.029 } 00:20:28.029 } 00:20:28.029 ]' 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.029 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.288 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:20:29.221 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:29.221 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.221 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.221 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.221 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.221 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.221 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.221 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.479 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.045 00:20:30.045 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.045 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.046 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.046 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.046 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.046 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:30.046 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.304 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.304 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.304 { 00:20:30.304 "cntlid": 67, 00:20:30.304 "qid": 0, 00:20:30.304 "state": "enabled", 00:20:30.304 "thread": "nvmf_tgt_poll_group_000", 00:20:30.304 "listen_address": { 00:20:30.304 "trtype": "TCP", 00:20:30.304 "adrfam": "IPv4", 00:20:30.304 "traddr": "10.0.0.2", 00:20:30.304 "trsvcid": "4420" 00:20:30.304 }, 00:20:30.304 "peer_address": { 00:20:30.304 "trtype": "TCP", 00:20:30.304 "adrfam": "IPv4", 00:20:30.304 "traddr": "10.0.0.1", 00:20:30.304 "trsvcid": "51938" 00:20:30.304 }, 00:20:30.304 "auth": { 00:20:30.304 "state": "completed", 00:20:30.304 "digest": "sha384", 00:20:30.304 "dhgroup": "ffdhe3072" 00:20:30.304 } 00:20:30.304 } 00:20:30.304 ]' 00:20:30.304 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.304 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.304 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.304 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.304 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.304 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.304 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.304 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.562 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:20:31.496 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.496 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.496 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.496 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.496 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.496 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.496 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.496 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.755 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.320 00:20:32.321 05:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.321 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.321 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.579 { 00:20:32.579 "cntlid": 69, 00:20:32.579 "qid": 0, 00:20:32.579 "state": "enabled", 00:20:32.579 "thread": "nvmf_tgt_poll_group_000", 00:20:32.579 "listen_address": { 00:20:32.579 "trtype": "TCP", 00:20:32.579 "adrfam": "IPv4", 00:20:32.579 "traddr": "10.0.0.2", 00:20:32.579 "trsvcid": "4420" 00:20:32.579 }, 00:20:32.579 "peer_address": { 00:20:32.579 "trtype": "TCP", 00:20:32.579 "adrfam": "IPv4", 00:20:32.579 "traddr": "10.0.0.1", 00:20:32.579 "trsvcid": "51962" 00:20:32.579 }, 00:20:32.579 "auth": { 00:20:32.579 "state": "completed", 00:20:32.579 "digest": "sha384", 00:20:32.579 "dhgroup": "ffdhe3072" 00:20:32.579 } 00:20:32.579 } 00:20:32.579 ]' 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.579 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.839 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:20:33.808 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.808 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.808 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.808 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:33.808 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.808 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.808 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.808 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.066 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.632 00:20:34.632 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.632 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.633 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.633 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.633 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.633 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.633 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.891 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.891 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.891 { 00:20:34.891 "cntlid": 71, 00:20:34.891 "qid": 0, 00:20:34.891 "state": "enabled", 00:20:34.891 "thread": "nvmf_tgt_poll_group_000", 
00:20:34.891 "listen_address": { 00:20:34.891 "trtype": "TCP", 00:20:34.891 "adrfam": "IPv4", 00:20:34.891 "traddr": "10.0.0.2", 00:20:34.891 "trsvcid": "4420" 00:20:34.891 }, 00:20:34.891 "peer_address": { 00:20:34.891 "trtype": "TCP", 00:20:34.891 "adrfam": "IPv4", 00:20:34.891 "traddr": "10.0.0.1", 00:20:34.891 "trsvcid": "51994" 00:20:34.891 }, 00:20:34.891 "auth": { 00:20:34.891 "state": "completed", 00:20:34.891 "digest": "sha384", 00:20:34.891 "dhgroup": "ffdhe3072" 00:20:34.891 } 00:20:34.891 } 00:20:34.891 ]' 00:20:34.891 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.891 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.891 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.891 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.891 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.891 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.891 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.891 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.149 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 
00:20:36.083 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.083 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.083 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.083 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.083 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.083 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.083 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.083 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.083 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.341 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.342 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.907 00:20:36.907 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.907 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.907 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.165 05:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.165 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.166 { 00:20:37.166 "cntlid": 73, 00:20:37.166 "qid": 0, 00:20:37.166 "state": "enabled", 00:20:37.166 "thread": "nvmf_tgt_poll_group_000", 00:20:37.166 "listen_address": { 00:20:37.166 "trtype": "TCP", 00:20:37.166 "adrfam": "IPv4", 00:20:37.166 "traddr": "10.0.0.2", 00:20:37.166 "trsvcid": "4420" 00:20:37.166 }, 00:20:37.166 "peer_address": { 00:20:37.166 "trtype": "TCP", 00:20:37.166 "adrfam": "IPv4", 00:20:37.166 "traddr": "10.0.0.1", 00:20:37.166 "trsvcid": "46704" 00:20:37.166 }, 00:20:37.166 "auth": { 00:20:37.166 "state": "completed", 00:20:37.166 "digest": "sha384", 00:20:37.166 "dhgroup": "ffdhe4096" 00:20:37.166 } 00:20:37.166 } 00:20:37.166 ]' 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.166 05:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.166 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.424 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.798 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.056 00:20:39.056 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.056 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.056 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.314 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.314 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.314 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.314 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.314 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.314 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.314 { 00:20:39.314 "cntlid": 75, 00:20:39.314 "qid": 0, 00:20:39.314 "state": "enabled", 00:20:39.314 "thread": "nvmf_tgt_poll_group_000", 00:20:39.314 "listen_address": { 00:20:39.314 "trtype": "TCP", 00:20:39.314 "adrfam": "IPv4", 00:20:39.314 "traddr": "10.0.0.2", 00:20:39.314 "trsvcid": "4420" 00:20:39.314 }, 00:20:39.314 "peer_address": { 00:20:39.314 "trtype": "TCP", 00:20:39.314 "adrfam": "IPv4", 00:20:39.314 "traddr": "10.0.0.1", 00:20:39.314 "trsvcid": "46724" 00:20:39.314 
},
00:20:39.314 "auth": {
00:20:39.314 "state": "completed",
00:20:39.314 "digest": "sha384",
00:20:39.314 "dhgroup": "ffdhe4096"
00:20:39.314 }
00:20:39.314 }
00:20:39.314 ]'
00:20:39.572 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:39.572 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:39.572 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:39.572 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:39.572 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:39.572 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:39.572 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:39.572 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:39.833 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==:
00:20:40.764 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:40.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:40.764 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:40.764 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:40.764 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.764 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:40.764 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:40.764 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:40.764 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:41.021 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2
00:20:41.021 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:41.022 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:41.022 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:41.022 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:41.022 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:41.022 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:41.022 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.022 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.022 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.022 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:41.022 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:41.587
00:20:41.587 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:41.587 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:41.587 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:41.845 {
00:20:41.845 "cntlid": 77,
00:20:41.845 "qid": 0,
00:20:41.845 "state": "enabled",
00:20:41.845 "thread": "nvmf_tgt_poll_group_000",
00:20:41.845 "listen_address": {
00:20:41.845 "trtype": "TCP",
00:20:41.845 "adrfam": "IPv4",
00:20:41.845 "traddr": "10.0.0.2",
00:20:41.845 "trsvcid": "4420"
00:20:41.845 },
00:20:41.845 "peer_address": {
00:20:41.845 "trtype": "TCP",
00:20:41.845 "adrfam": "IPv4",
00:20:41.845 "traddr": "10.0.0.1",
00:20:41.845 "trsvcid": "46750"
00:20:41.845 },
00:20:41.845 "auth": {
00:20:41.845 "state": "completed",
00:20:41.845 "digest": "sha384",
00:20:41.845 "dhgroup": "ffdhe4096"
00:20:41.845 }
00:20:41.845 }
00:20:41.845 ]'
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:41.845 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:42.103 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN:
00:20:43.037 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:43.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:43.037 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:43.037 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:43.037 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.037 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:43.037 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:43.037 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:43.037 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:43.295 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:43.861
00:20:43.861 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:43.861 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:43.861 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:43.861 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:43.861 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:43.861 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:43.861 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:44.118 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:44.118 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:44.118 {
00:20:44.118 "cntlid": 79,
00:20:44.118 "qid": 0,
00:20:44.118 "state": "enabled",
00:20:44.118 "thread": "nvmf_tgt_poll_group_000",
00:20:44.118 "listen_address": {
00:20:44.118 "trtype": "TCP",
00:20:44.118 "adrfam": "IPv4",
00:20:44.118 "traddr": "10.0.0.2",
00:20:44.118 "trsvcid": "4420"
00:20:44.118 },
00:20:44.118 "peer_address": {
00:20:44.118 "trtype": "TCP",
00:20:44.118 "adrfam": "IPv4",
00:20:44.118 "traddr": "10.0.0.1",
00:20:44.118 "trsvcid": "46774"
00:20:44.118 },
00:20:44.118 "auth": {
00:20:44.118 "state": "completed",
00:20:44.118 "digest": "sha384",
00:20:44.118 "dhgroup": "ffdhe4096"
00:20:44.118 }
00:20:44.118 }
00:20:44.118 ]'
00:20:44.118 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:44.118 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:44.118 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:44.118 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:44.118 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:44.118 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:44.118 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:44.118 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:44.374 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=:
00:20:45.307 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:45.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:45.307 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:45.307 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:45.307 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.307 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:45.307 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:45.307 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:45.307 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:45.307 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:45.564 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:45.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:46.130
00:20:46.130 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:46.130 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:46.130 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:46.388 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:46.388 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:46.388 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:46.388 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.388 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:46.388 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:46.388 {
00:20:46.388 "cntlid": 81,
00:20:46.388 "qid": 0,
00:20:46.388 "state": "enabled",
00:20:46.388 "thread": "nvmf_tgt_poll_group_000",
00:20:46.388 "listen_address": {
00:20:46.388 "trtype": "TCP",
00:20:46.388 "adrfam": "IPv4",
00:20:46.388 "traddr": "10.0.0.2",
00:20:46.388 "trsvcid": "4420"
00:20:46.388 },
00:20:46.388 "peer_address": {
00:20:46.388 "trtype": "TCP",
00:20:46.388 "adrfam": "IPv4",
00:20:46.388 "traddr": "10.0.0.1",
00:20:46.388 "trsvcid": "34088"
00:20:46.388 },
00:20:46.388 "auth": {
00:20:46.388 "state": "completed",
00:20:46.388 "digest": "sha384",
00:20:46.388 "dhgroup": "ffdhe6144"
00:20:46.388 }
00:20:46.388 }
00:20:46.388 ]'
00:20:46.388 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:46.645 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:46.645 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:46.645 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:46.645 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:46.645 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:46.645 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:46.645 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:46.904 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=:
00:20:47.841 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:47.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:47.841 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:47.841 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:47.841 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.841 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:47.841 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:47.841 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:47.841 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:48.098 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:48.663
00:20:48.663 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:48.663 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:48.663 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:48.921 {
00:20:48.921 "cntlid": 83,
00:20:48.921 "qid": 0,
00:20:48.921 "state": "enabled",
00:20:48.921 "thread": "nvmf_tgt_poll_group_000",
00:20:48.921 "listen_address": {
00:20:48.921 "trtype": "TCP",
00:20:48.921 "adrfam": "IPv4",
00:20:48.921 "traddr": "10.0.0.2",
00:20:48.921 "trsvcid": "4420"
00:20:48.921 },
00:20:48.921 "peer_address": {
00:20:48.921 "trtype": "TCP",
00:20:48.921 "adrfam": "IPv4",
00:20:48.921 "traddr": "10.0.0.1",
00:20:48.921 "trsvcid": "34122"
00:20:48.921 },
00:20:48.921 "auth": {
00:20:48.921 "state": "completed",
00:20:48.921 "digest": "sha384",
00:20:48.921 "dhgroup": "ffdhe6144"
00:20:48.921 }
00:20:48.921 }
00:20:48.921 ]'
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:48.921 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:49.179 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:49.179 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:49.179 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:49.437 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==:
00:20:50.371 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:50.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:50.371 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:50.371 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:50.371 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.371 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.371 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:50.371 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:50.371 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:50.629 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:51.195
00:20:51.195 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:51.195 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:51.195 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:51.453 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:51.453 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:51.453 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:51.453 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.453 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:51.453 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:51.453 {
00:20:51.453 "cntlid": 85,
00:20:51.453 "qid": 0,
00:20:51.453 "state": "enabled",
00:20:51.453 "thread": "nvmf_tgt_poll_group_000",
00:20:51.453 "listen_address": {
00:20:51.453 "trtype": "TCP",
00:20:51.453 "adrfam": "IPv4",
00:20:51.453 "traddr": "10.0.0.2",
00:20:51.453 "trsvcid": "4420"
00:20:51.453 },
00:20:51.453 "peer_address": {
00:20:51.453 "trtype": "TCP",
00:20:51.453 "adrfam": "IPv4",
00:20:51.453 "traddr": "10.0.0.1",
00:20:51.453 "trsvcid": "34144"
00:20:51.453 },
00:20:51.453 "auth": {
00:20:51.453 "state": "completed",
00:20:51.453 "digest": "sha384",
00:20:51.453 "dhgroup": "ffdhe6144"
00:20:51.453 }
00:20:51.453 }
00:20:51.453 ]'
00:20:51.453 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:51.453 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:51.453 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:51.453 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:51.453 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:51.453 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:51.453 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:51.453 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:51.711 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN:
00:20:52.645 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:52.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:52.645 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:52.645 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.645 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.645 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:52.645 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:52.645 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:52.645 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:53.211 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:53.469
00:20:53.469 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:53.469 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:53.469 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:53.727 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:53.727 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:53.727 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.727 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.727 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.727 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:53.727 {
00:20:53.727 "cntlid": 87,
00:20:53.727 "qid": 0,
00:20:53.727 "state": "enabled",
00:20:53.727 "thread": "nvmf_tgt_poll_group_000",
00:20:53.727 "listen_address": {
00:20:53.727 "trtype": "TCP",
00:20:53.727 "adrfam": "IPv4",
00:20:53.727 "traddr": "10.0.0.2",
00:20:53.727 "trsvcid": "4420"
00:20:53.727 },
00:20:53.727 "peer_address": {
00:20:53.727 "trtype": "TCP",
00:20:53.727 "adrfam": "IPv4",
00:20:53.727 "traddr": "10.0.0.1",
00:20:53.727 "trsvcid": "34172"
00:20:53.727 },
00:20:53.727 "auth": {
00:20:53.727 "state": "completed",
00:20:53.727 "digest": "sha384",
00:20:53.727 "dhgroup": "ffdhe6144"
00:20:53.727 }
00:20:53.727 }
00:20:53.728 ]'
00:20:53.985 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:53.985 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:53.985 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:53.985 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:53.985 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:53.985 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:53.986 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:53.986 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.243 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:20:55.178 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.178 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.178 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.178 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.178 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.178 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.178 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.178 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.178 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.436 05:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.436 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:56.369 00:20:56.369 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.369 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.369 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.627 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.627 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.627 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.627 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.627 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.627 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.627 { 00:20:56.627 "cntlid": 89, 00:20:56.627 "qid": 0, 00:20:56.627 "state": "enabled", 00:20:56.627 "thread": "nvmf_tgt_poll_group_000", 00:20:56.627 "listen_address": { 00:20:56.627 "trtype": "TCP", 00:20:56.627 "adrfam": "IPv4", 00:20:56.627 "traddr": "10.0.0.2", 00:20:56.627 "trsvcid": "4420" 00:20:56.627 }, 00:20:56.627 "peer_address": { 00:20:56.627 "trtype": "TCP", 00:20:56.627 "adrfam": "IPv4", 00:20:56.627 "traddr": "10.0.0.1", 00:20:56.627 "trsvcid": "34200" 00:20:56.627 }, 00:20:56.627 "auth": { 00:20:56.627 "state": "completed", 00:20:56.627 "digest": "sha384", 00:20:56.627 "dhgroup": "ffdhe8192" 00:20:56.627 } 00:20:56.627 } 00:20:56.627 ]' 00:20:56.627 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.627 
05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.628 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.628 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.628 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.628 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.628 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.628 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.193 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:20:58.122 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.122 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.122 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:58.122 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.122 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.122 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.122 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.122 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.380 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.312 00:20:59.312 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.312 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.312 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.312 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.569 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.569 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.569 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.569 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.569 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:59.569 { 00:20:59.569 "cntlid": 91, 00:20:59.569 "qid": 0, 00:20:59.569 "state": "enabled", 00:20:59.569 "thread": "nvmf_tgt_poll_group_000", 00:20:59.569 "listen_address": { 00:20:59.569 "trtype": "TCP", 00:20:59.569 "adrfam": "IPv4", 00:20:59.569 "traddr": "10.0.0.2", 00:20:59.569 "trsvcid": "4420" 00:20:59.569 }, 00:20:59.569 "peer_address": { 00:20:59.569 "trtype": "TCP", 00:20:59.569 "adrfam": "IPv4", 00:20:59.569 "traddr": "10.0.0.1", 00:20:59.569 "trsvcid": "49918" 00:20:59.569 }, 00:20:59.569 "auth": { 00:20:59.569 "state": "completed", 00:20:59.569 "digest": "sha384", 00:20:59.569 "dhgroup": "ffdhe8192" 00:20:59.570 } 00:20:59.570 } 00:20:59.570 ]' 00:20:59.570 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.570 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.570 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.570 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.570 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.570 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.570 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.570 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.827 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:21:00.758 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.758 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.758 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.758 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.758 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.758 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.758 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.758 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.019 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:01.019 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.019 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.019 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:01.019 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:01.019 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.019 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.019 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.019 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.316 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.316 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.316 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.249 00:21:02.249 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.249 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.249 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.249 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.249 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.250 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.250 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.250 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.250 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.250 { 00:21:02.250 "cntlid": 93, 00:21:02.250 "qid": 0, 00:21:02.250 "state": "enabled", 00:21:02.250 "thread": "nvmf_tgt_poll_group_000", 00:21:02.250 "listen_address": { 00:21:02.250 "trtype": "TCP", 00:21:02.250 "adrfam": "IPv4", 00:21:02.250 "traddr": "10.0.0.2", 00:21:02.250 "trsvcid": "4420" 00:21:02.250 }, 00:21:02.250 "peer_address": { 00:21:02.250 "trtype": "TCP", 00:21:02.250 "adrfam": "IPv4", 00:21:02.250 "traddr": "10.0.0.1", 00:21:02.250 "trsvcid": "49946" 00:21:02.250 }, 00:21:02.250 "auth": { 00:21:02.250 "state": "completed", 00:21:02.250 "digest": "sha384", 00:21:02.250 "dhgroup": "ffdhe8192" 00:21:02.250 } 00:21:02.250 } 00:21:02.250 ]' 00:21:02.250 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.250 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.250 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.508 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:21:02.508 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.508 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.508 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.508 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.764 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:21:03.696 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.696 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.696 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.696 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.696 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.696 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.697 05:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.697 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:21:03.954 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.887 00:21:04.887 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.887 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.887 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.887 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.887 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.887 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.887 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.887 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.887 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.887 { 00:21:04.887 "cntlid": 95, 00:21:04.887 "qid": 0, 00:21:04.887 "state": "enabled", 00:21:04.887 "thread": "nvmf_tgt_poll_group_000", 00:21:04.887 "listen_address": { 00:21:04.887 "trtype": "TCP", 00:21:04.887 "adrfam": "IPv4", 00:21:04.887 "traddr": "10.0.0.2", 00:21:04.887 "trsvcid": "4420" 00:21:04.887 }, 00:21:04.887 "peer_address": { 00:21:04.887 "trtype": "TCP", 00:21:04.887 "adrfam": "IPv4", 00:21:04.887 "traddr": "10.0.0.1", 
00:21:04.887 "trsvcid": "49974" 00:21:04.887 }, 00:21:04.887 "auth": { 00:21:04.887 "state": "completed", 00:21:04.887 "digest": "sha384", 00:21:04.887 "dhgroup": "ffdhe8192" 00:21:04.887 } 00:21:04.887 } 00:21:04.887 ]' 00:21:04.887 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.144 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.144 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.144 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.144 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.144 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.144 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.144 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.402 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:21:06.332 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.332 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.332 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.332 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.332 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.332 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:06.332 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.332 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.332 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.332 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.589 05:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.589 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.847 00:21:06.847 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.847 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.847 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.104 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.104 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:07.104 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.104 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.104 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.104 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.104 { 00:21:07.104 "cntlid": 97, 00:21:07.104 "qid": 0, 00:21:07.104 "state": "enabled", 00:21:07.104 "thread": "nvmf_tgt_poll_group_000", 00:21:07.104 "listen_address": { 00:21:07.104 "trtype": "TCP", 00:21:07.104 "adrfam": "IPv4", 00:21:07.104 "traddr": "10.0.0.2", 00:21:07.104 "trsvcid": "4420" 00:21:07.104 }, 00:21:07.104 "peer_address": { 00:21:07.104 "trtype": "TCP", 00:21:07.104 "adrfam": "IPv4", 00:21:07.104 "traddr": "10.0.0.1", 00:21:07.104 "trsvcid": "51556" 00:21:07.104 }, 00:21:07.104 "auth": { 00:21:07.104 "state": "completed", 00:21:07.104 "digest": "sha512", 00:21:07.104 "dhgroup": "null" 00:21:07.104 } 00:21:07.104 } 00:21:07.104 ]' 00:21:07.104 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.104 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.104 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.362 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:07.362 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.362 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.362 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:07.362 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.619 05:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:21:08.551 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.551 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.551 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.551 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.551 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.551 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.551 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.551 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.808 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.065 00:21:09.065 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.065 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.065 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.323 { 00:21:09.323 "cntlid": 99, 00:21:09.323 "qid": 0, 00:21:09.323 "state": "enabled", 00:21:09.323 "thread": "nvmf_tgt_poll_group_000", 00:21:09.323 "listen_address": { 00:21:09.323 "trtype": "TCP", 00:21:09.323 "adrfam": "IPv4", 00:21:09.323 "traddr": "10.0.0.2", 00:21:09.323 "trsvcid": "4420" 00:21:09.323 }, 00:21:09.323 "peer_address": { 00:21:09.323 "trtype": "TCP", 00:21:09.323 "adrfam": "IPv4", 00:21:09.323 "traddr": "10.0.0.1", 00:21:09.323 "trsvcid": "51576" 00:21:09.323 }, 00:21:09.323 "auth": { 00:21:09.323 "state": "completed", 00:21:09.323 "digest": "sha512", 00:21:09.323 "dhgroup": "null" 00:21:09.323 } 00:21:09.323 } 00:21:09.323 ]' 00:21:09.323 
05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.323 05:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.581 05:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:21:10.515 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.515 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.515 05:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.515 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.515 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.515 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.515 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.515 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.772 05:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.772 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.338 00:21:11.338 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.338 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.338 05:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.338 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.338 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.338 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.338 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.338 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:11.338 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.338 { 00:21:11.338 "cntlid": 101, 00:21:11.338 "qid": 0, 00:21:11.338 "state": "enabled", 00:21:11.338 "thread": "nvmf_tgt_poll_group_000", 00:21:11.338 "listen_address": { 00:21:11.338 "trtype": "TCP", 00:21:11.338 "adrfam": "IPv4", 00:21:11.338 "traddr": "10.0.0.2", 00:21:11.338 "trsvcid": "4420" 00:21:11.338 }, 00:21:11.338 "peer_address": { 00:21:11.338 "trtype": "TCP", 00:21:11.338 "adrfam": "IPv4", 00:21:11.338 "traddr": "10.0.0.1", 00:21:11.338 "trsvcid": "51590" 00:21:11.338 }, 00:21:11.338 "auth": { 00:21:11.338 "state": "completed", 00:21:11.338 "digest": "sha512", 00:21:11.338 "dhgroup": "null" 00:21:11.338 } 00:21:11.338 } 00:21:11.338 ]' 00:21:11.338 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.596 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.596 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.596 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:11.596 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.596 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.596 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.596 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.853 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:21:12.786 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.786 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.786 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.786 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.786 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.786 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.786 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:12.786 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.044 05:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.044 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.302 00:21:13.302 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.302 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.302 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.560 { 00:21:13.560 "cntlid": 103, 00:21:13.560 "qid": 0, 00:21:13.560 "state": "enabled", 00:21:13.560 "thread": "nvmf_tgt_poll_group_000", 00:21:13.560 "listen_address": { 00:21:13.560 "trtype": "TCP", 00:21:13.560 "adrfam": "IPv4", 00:21:13.560 "traddr": "10.0.0.2", 00:21:13.560 "trsvcid": "4420" 00:21:13.560 }, 00:21:13.560 "peer_address": { 00:21:13.560 "trtype": "TCP", 00:21:13.560 "adrfam": "IPv4", 00:21:13.560 "traddr": "10.0.0.1", 00:21:13.560 "trsvcid": "51614" 00:21:13.560 }, 00:21:13.560 "auth": { 00:21:13.560 "state": "completed", 00:21:13.560 "digest": "sha512", 00:21:13.560 "dhgroup": "null" 00:21:13.560 } 00:21:13.560 } 00:21:13.560 ]' 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:13.560 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:21:13.818 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.818 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.818 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.076 05:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:21:15.046 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.046 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.046 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.046 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.046 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.046 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.046 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.046 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.046 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.308 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.308 05:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.566 00:21:15.566 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.566 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.566 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.824 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.824 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.824 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.824 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.824 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.824 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.824 { 00:21:15.824 "cntlid": 105, 00:21:15.824 "qid": 0, 00:21:15.824 "state": "enabled", 00:21:15.824 "thread": "nvmf_tgt_poll_group_000", 00:21:15.824 "listen_address": { 00:21:15.824 "trtype": "TCP", 00:21:15.824 "adrfam": "IPv4", 00:21:15.824 "traddr": "10.0.0.2", 00:21:15.824 "trsvcid": "4420" 00:21:15.824 }, 00:21:15.824 "peer_address": { 00:21:15.824 "trtype": "TCP", 00:21:15.825 "adrfam": "IPv4", 00:21:15.825 "traddr": "10.0.0.1", 
00:21:15.825 "trsvcid": "51636" 00:21:15.825 }, 00:21:15.825 "auth": { 00:21:15.825 "state": "completed", 00:21:15.825 "digest": "sha512", 00:21:15.825 "dhgroup": "ffdhe2048" 00:21:15.825 } 00:21:15.825 } 00:21:15.825 ]' 00:21:15.825 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.825 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.825 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.825 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.825 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.825 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.825 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.825 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.083 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:21:17.017 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:17.017 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.017 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.017 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.275 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.275 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.275 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.275 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.533 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.790 00:21:17.790 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.791 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.791 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.048 { 00:21:18.048 "cntlid": 107, 00:21:18.048 "qid": 0, 00:21:18.048 "state": "enabled", 00:21:18.048 "thread": "nvmf_tgt_poll_group_000", 00:21:18.048 "listen_address": { 00:21:18.048 "trtype": "TCP", 00:21:18.048 "adrfam": "IPv4", 00:21:18.048 "traddr": "10.0.0.2", 00:21:18.048 "trsvcid": "4420" 00:21:18.048 }, 00:21:18.048 "peer_address": { 00:21:18.048 "trtype": "TCP", 00:21:18.048 "adrfam": "IPv4", 00:21:18.048 "traddr": "10.0.0.1", 00:21:18.048 "trsvcid": "41398" 00:21:18.048 }, 00:21:18.048 "auth": { 00:21:18.048 "state": "completed", 00:21:18.048 "digest": "sha512", 00:21:18.048 "dhgroup": "ffdhe2048" 00:21:18.048 } 00:21:18.048 } 00:21:18.048 ]' 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.048 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.306 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:21:19.239 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.497 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.497 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.497 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.497 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.497 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.497 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.497 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.755 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.012 00:21:20.012 05:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.012 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.012 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.268 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.268 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.268 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.268 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.268 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.268 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.268 { 00:21:20.268 "cntlid": 109, 00:21:20.268 "qid": 0, 00:21:20.268 "state": "enabled", 00:21:20.268 "thread": "nvmf_tgt_poll_group_000", 00:21:20.268 "listen_address": { 00:21:20.268 "trtype": "TCP", 00:21:20.268 "adrfam": "IPv4", 00:21:20.268 "traddr": "10.0.0.2", 00:21:20.268 "trsvcid": "4420" 00:21:20.268 }, 00:21:20.268 "peer_address": { 00:21:20.268 "trtype": "TCP", 00:21:20.268 "adrfam": "IPv4", 00:21:20.269 "traddr": "10.0.0.1", 00:21:20.269 "trsvcid": "41430" 00:21:20.269 }, 00:21:20.269 "auth": { 00:21:20.269 "state": "completed", 00:21:20.269 "digest": "sha512", 00:21:20.269 "dhgroup": "ffdhe2048" 00:21:20.269 } 00:21:20.269 } 00:21:20.269 ]' 00:21:20.269 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.269 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.269 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.269 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.269 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.269 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.269 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.269 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.526 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:21:21.461 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.461 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.461 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.461 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:21.461 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.461 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.461 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.461 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.719 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.285 00:21:22.285 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.285 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.285 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.285 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.285 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.285 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.285 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.285 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.285 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.285 { 00:21:22.285 "cntlid": 111, 00:21:22.285 "qid": 0, 00:21:22.285 "state": "enabled", 00:21:22.285 "thread": "nvmf_tgt_poll_group_000", 
00:21:22.285 "listen_address": { 00:21:22.285 "trtype": "TCP", 00:21:22.285 "adrfam": "IPv4", 00:21:22.285 "traddr": "10.0.0.2", 00:21:22.285 "trsvcid": "4420" 00:21:22.285 }, 00:21:22.285 "peer_address": { 00:21:22.285 "trtype": "TCP", 00:21:22.285 "adrfam": "IPv4", 00:21:22.285 "traddr": "10.0.0.1", 00:21:22.285 "trsvcid": "41460" 00:21:22.285 }, 00:21:22.285 "auth": { 00:21:22.285 "state": "completed", 00:21:22.285 "digest": "sha512", 00:21:22.285 "dhgroup": "ffdhe2048" 00:21:22.285 } 00:21:22.285 } 00:21:22.285 ]' 00:21:22.285 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.543 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.543 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.543 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.543 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.543 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.543 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.543 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.801 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 
00:21:23.735 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.735 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.735 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.735 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.735 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.735 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.735 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.735 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.735 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.993 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.251 00:21:24.251 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.251 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.251 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.510 05:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.510 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.510 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.510 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.510 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.510 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.510 { 00:21:24.510 "cntlid": 113, 00:21:24.510 "qid": 0, 00:21:24.510 "state": "enabled", 00:21:24.510 "thread": "nvmf_tgt_poll_group_000", 00:21:24.510 "listen_address": { 00:21:24.510 "trtype": "TCP", 00:21:24.510 "adrfam": "IPv4", 00:21:24.510 "traddr": "10.0.0.2", 00:21:24.510 "trsvcid": "4420" 00:21:24.510 }, 00:21:24.510 "peer_address": { 00:21:24.510 "trtype": "TCP", 00:21:24.510 "adrfam": "IPv4", 00:21:24.510 "traddr": "10.0.0.1", 00:21:24.510 "trsvcid": "41486" 00:21:24.510 }, 00:21:24.510 "auth": { 00:21:24.510 "state": "completed", 00:21:24.510 "digest": "sha512", 00:21:24.510 "dhgroup": "ffdhe3072" 00:21:24.510 } 00:21:24.510 } 00:21:24.510 ]' 00:21:24.510 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.510 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.510 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.510 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.510 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.768 05:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.768 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.768 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.027 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:21:25.961 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.961 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.961 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.961 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.961 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.961 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.961 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:21:25.961 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.219 05:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.477 00:21:26.477 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.478 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.478 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.736 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.736 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.736 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.736 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.736 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.736 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.736 { 00:21:26.736 "cntlid": 115, 00:21:26.736 "qid": 0, 00:21:26.736 "state": "enabled", 00:21:26.736 "thread": "nvmf_tgt_poll_group_000", 00:21:26.736 "listen_address": { 00:21:26.736 "trtype": "TCP", 00:21:26.736 "adrfam": "IPv4", 00:21:26.736 "traddr": "10.0.0.2", 00:21:26.736 "trsvcid": "4420" 00:21:26.736 }, 00:21:26.736 "peer_address": { 00:21:26.736 "trtype": "TCP", 00:21:26.736 "adrfam": "IPv4", 00:21:26.736 "traddr": "10.0.0.1", 00:21:26.736 "trsvcid": "47506" 00:21:26.736 
}, 00:21:26.736 "auth": { 00:21:26.736 "state": "completed", 00:21:26.736 "digest": "sha512", 00:21:26.736 "dhgroup": "ffdhe3072" 00:21:26.736 } 00:21:26.736 } 00:21:26.736 ]' 00:21:26.736 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.736 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.736 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.994 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:26.994 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.994 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.994 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.994 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.252 05:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:21:28.186 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.186 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.186 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.186 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.186 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.186 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.186 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.186 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.443 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.730 00:21:28.730 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.730 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.730 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.005 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.005 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.005 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.005 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:21:29.005 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.005 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.005 { 00:21:29.005 "cntlid": 117, 00:21:29.005 "qid": 0, 00:21:29.005 "state": "enabled", 00:21:29.006 "thread": "nvmf_tgt_poll_group_000", 00:21:29.006 "listen_address": { 00:21:29.006 "trtype": "TCP", 00:21:29.006 "adrfam": "IPv4", 00:21:29.006 "traddr": "10.0.0.2", 00:21:29.006 "trsvcid": "4420" 00:21:29.006 }, 00:21:29.006 "peer_address": { 00:21:29.006 "trtype": "TCP", 00:21:29.006 "adrfam": "IPv4", 00:21:29.006 "traddr": "10.0.0.1", 00:21:29.006 "trsvcid": "47530" 00:21:29.006 }, 00:21:29.006 "auth": { 00:21:29.006 "state": "completed", 00:21:29.006 "digest": "sha512", 00:21:29.006 "dhgroup": "ffdhe3072" 00:21:29.006 } 00:21:29.006 } 00:21:29.006 ]' 00:21:29.006 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.006 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.006 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.006 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.006 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.263 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.263 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.263 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:29.263 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:21:30.195 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.453 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.453 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.453 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.453 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.453 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.453 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.453 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:30.711 05:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.711 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.969 00:21:30.969 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.969 05:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.969 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.227 { 00:21:31.227 "cntlid": 119, 00:21:31.227 "qid": 0, 00:21:31.227 "state": "enabled", 00:21:31.227 "thread": "nvmf_tgt_poll_group_000", 00:21:31.227 "listen_address": { 00:21:31.227 "trtype": "TCP", 00:21:31.227 "adrfam": "IPv4", 00:21:31.227 "traddr": "10.0.0.2", 00:21:31.227 "trsvcid": "4420" 00:21:31.227 }, 00:21:31.227 "peer_address": { 00:21:31.227 "trtype": "TCP", 00:21:31.227 "adrfam": "IPv4", 00:21:31.227 "traddr": "10.0.0.1", 00:21:31.227 "trsvcid": "47566" 00:21:31.227 }, 00:21:31.227 "auth": { 00:21:31.227 "state": "completed", 00:21:31.227 "digest": "sha512", 00:21:31.227 "dhgroup": "ffdhe3072" 00:21:31.227 } 00:21:31.227 } 00:21:31.227 ]' 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.227 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.484 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.484 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.484 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.742 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:21:32.674 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.674 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.674 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.674 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.674 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.674 05:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.674 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.674 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.674 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.931 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:32.931 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.931 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.931 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:32.931 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:32.931 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.931 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.932 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.932 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.932 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.932 05:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.932 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.498 00:21:33.498 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.498 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.498 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.498 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.498 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.498 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.498 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.498 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.498 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.498 { 00:21:33.498 "cntlid": 121, 00:21:33.498 "qid": 0, 00:21:33.498 "state": "enabled", 00:21:33.498 "thread": 
"nvmf_tgt_poll_group_000", 00:21:33.498 "listen_address": { 00:21:33.498 "trtype": "TCP", 00:21:33.498 "adrfam": "IPv4", 00:21:33.498 "traddr": "10.0.0.2", 00:21:33.498 "trsvcid": "4420" 00:21:33.498 }, 00:21:33.498 "peer_address": { 00:21:33.498 "trtype": "TCP", 00:21:33.498 "adrfam": "IPv4", 00:21:33.498 "traddr": "10.0.0.1", 00:21:33.498 "trsvcid": "47598" 00:21:33.498 }, 00:21:33.498 "auth": { 00:21:33.498 "state": "completed", 00:21:33.498 "digest": "sha512", 00:21:33.498 "dhgroup": "ffdhe4096" 00:21:33.498 } 00:21:33.498 } 00:21:33.498 ]' 00:21:33.498 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.756 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.756 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.756 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:33.756 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.756 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.756 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.756 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.013 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:21:34.946 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.946 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.946 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.946 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.946 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.946 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.946 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:34.946 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe4096 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.203 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.768 00:21:35.768 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.768 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.768 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.026 { 00:21:36.026 "cntlid": 123, 00:21:36.026 "qid": 0, 00:21:36.026 "state": "enabled", 00:21:36.026 "thread": "nvmf_tgt_poll_group_000", 00:21:36.026 "listen_address": { 00:21:36.026 "trtype": "TCP", 00:21:36.026 "adrfam": "IPv4", 00:21:36.026 "traddr": "10.0.0.2", 00:21:36.026 "trsvcid": "4420" 00:21:36.026 }, 00:21:36.026 "peer_address": { 00:21:36.026 "trtype": "TCP", 00:21:36.026 "adrfam": "IPv4", 00:21:36.026 "traddr": "10.0.0.1", 00:21:36.026 "trsvcid": "47620" 00:21:36.026 }, 00:21:36.026 "auth": { 00:21:36.026 "state": "completed", 00:21:36.026 "digest": "sha512", 00:21:36.026 "dhgroup": "ffdhe4096" 00:21:36.026 } 00:21:36.026 } 00:21:36.026 ]' 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.026 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.284 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:21:37.217 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.217 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.217 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.217 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.217 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.217 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.217 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.217 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.474 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:37.474 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.474 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.474 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:37.474 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:37.474 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.474 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.474 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.474 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.474 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.475 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.475 05:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.039 00:21:38.039 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.039 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.039 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.297 { 00:21:38.297 "cntlid": 125, 00:21:38.297 "qid": 0, 00:21:38.297 "state": "enabled", 00:21:38.297 "thread": "nvmf_tgt_poll_group_000", 00:21:38.297 "listen_address": { 00:21:38.297 "trtype": "TCP", 00:21:38.297 "adrfam": "IPv4", 00:21:38.297 "traddr": "10.0.0.2", 00:21:38.297 "trsvcid": "4420" 00:21:38.297 }, 00:21:38.297 "peer_address": { 00:21:38.297 "trtype": "TCP", 00:21:38.297 "adrfam": "IPv4", 00:21:38.297 "traddr": "10.0.0.1", 
00:21:38.297 "trsvcid": "47502" 00:21:38.297 }, 00:21:38.297 "auth": { 00:21:38.297 "state": "completed", 00:21:38.297 "digest": "sha512", 00:21:38.297 "dhgroup": "ffdhe4096" 00:21:38.297 } 00:21:38.297 } 00:21:38.297 ]' 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.297 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.554 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:21:39.488 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.489 05:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.489 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.489 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.489 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.489 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.489 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.489 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.747 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.311 00:21:40.311 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.311 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.311 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.569 05:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.569 { 00:21:40.569 "cntlid": 127, 00:21:40.569 "qid": 0, 00:21:40.569 "state": "enabled", 00:21:40.569 "thread": "nvmf_tgt_poll_group_000", 00:21:40.569 "listen_address": { 00:21:40.569 "trtype": "TCP", 00:21:40.569 "adrfam": "IPv4", 00:21:40.569 "traddr": "10.0.0.2", 00:21:40.569 "trsvcid": "4420" 00:21:40.569 }, 00:21:40.569 "peer_address": { 00:21:40.569 "trtype": "TCP", 00:21:40.569 "adrfam": "IPv4", 00:21:40.569 "traddr": "10.0.0.1", 00:21:40.569 "trsvcid": "47520" 00:21:40.569 }, 00:21:40.569 "auth": { 00:21:40.569 "state": "completed", 00:21:40.569 "digest": "sha512", 00:21:40.569 "dhgroup": "ffdhe4096" 00:21:40.569 } 00:21:40.569 } 00:21:40.569 ]' 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.569 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.827 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:21:41.761 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.761 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.761 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.761 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.761 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.761 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.761 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.761 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:41.761 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.019 05:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.019 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:42.610 00:21:42.610 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.610 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.610 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.885 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.885 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.885 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.885 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.885 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.885 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.885 { 00:21:42.885 "cntlid": 129, 00:21:42.885 "qid": 0, 00:21:42.885 "state": "enabled", 00:21:42.885 "thread": "nvmf_tgt_poll_group_000", 00:21:42.885 "listen_address": { 00:21:42.885 "trtype": "TCP", 00:21:42.885 "adrfam": "IPv4", 00:21:42.885 "traddr": "10.0.0.2", 00:21:42.885 "trsvcid": "4420" 00:21:42.885 }, 00:21:42.885 "peer_address": { 00:21:42.885 "trtype": "TCP", 00:21:42.885 "adrfam": "IPv4", 00:21:42.885 "traddr": "10.0.0.1", 00:21:42.885 "trsvcid": "47562" 00:21:42.885 }, 00:21:42.885 "auth": { 00:21:42.885 "state": "completed", 00:21:42.885 "digest": "sha512", 00:21:42.885 "dhgroup": "ffdhe6144" 00:21:42.885 } 00:21:42.885 } 00:21:42.885 ]' 00:21:42.885 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.885 
05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.885 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.885 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:42.885 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.143 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.143 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.143 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.401 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:21:44.334 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.334 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.334 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:44.334 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.334 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.334 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.334 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.334 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.593 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.159 00:21:45.159 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.159 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.159 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.159 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.159 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.159 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.159 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.159 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.159 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:21:45.159 { 00:21:45.159 "cntlid": 131, 00:21:45.159 "qid": 0, 00:21:45.159 "state": "enabled", 00:21:45.159 "thread": "nvmf_tgt_poll_group_000", 00:21:45.159 "listen_address": { 00:21:45.159 "trtype": "TCP", 00:21:45.159 "adrfam": "IPv4", 00:21:45.159 "traddr": "10.0.0.2", 00:21:45.159 "trsvcid": "4420" 00:21:45.159 }, 00:21:45.159 "peer_address": { 00:21:45.159 "trtype": "TCP", 00:21:45.159 "adrfam": "IPv4", 00:21:45.159 "traddr": "10.0.0.1", 00:21:45.159 "trsvcid": "47588" 00:21:45.159 }, 00:21:45.159 "auth": { 00:21:45.159 "state": "completed", 00:21:45.159 "digest": "sha512", 00:21:45.159 "dhgroup": "ffdhe6144" 00:21:45.159 } 00:21:45.159 } 00:21:45.159 ]' 00:21:45.159 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.418 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.418 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.418 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.418 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.418 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.418 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.418 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.677 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:21:46.611 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.611 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.611 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.611 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.611 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.611 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.611 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.611 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.869 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:46.869 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.869 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:46.869 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:46.869 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:46.870 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.870 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.870 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.870 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.870 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.870 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.870 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.437 00:21:47.437 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.438 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.438 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.695 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.695 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.695 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.695 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.695 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.695 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.695 { 00:21:47.695 "cntlid": 133, 00:21:47.695 "qid": 0, 00:21:47.695 "state": "enabled", 00:21:47.695 "thread": "nvmf_tgt_poll_group_000", 00:21:47.695 "listen_address": { 00:21:47.695 "trtype": "TCP", 00:21:47.695 "adrfam": "IPv4", 00:21:47.695 "traddr": "10.0.0.2", 00:21:47.695 "trsvcid": "4420" 00:21:47.695 }, 00:21:47.695 "peer_address": { 00:21:47.695 "trtype": "TCP", 00:21:47.695 "adrfam": "IPv4", 00:21:47.695 "traddr": "10.0.0.1", 00:21:47.695 "trsvcid": "46148" 00:21:47.695 }, 00:21:47.695 "auth": { 00:21:47.695 "state": "completed", 00:21:47.695 "digest": "sha512", 00:21:47.695 "dhgroup": "ffdhe6144" 00:21:47.695 } 00:21:47.695 } 00:21:47.696 ]' 00:21:47.696 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.696 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.696 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.696 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
00:21:47.696 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.696 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.696 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.696 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.954 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:21:48.888 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.147 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.147 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.147 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.147 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.147 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.147 05:41:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.147 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:21:49.406 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:49.972 00:21:49.972 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.972 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.972 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.231 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.231 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.231 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.231 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.231 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.231 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.231 { 00:21:50.231 "cntlid": 135, 00:21:50.231 "qid": 0, 00:21:50.231 "state": "enabled", 00:21:50.231 "thread": "nvmf_tgt_poll_group_000", 00:21:50.231 "listen_address": { 00:21:50.231 "trtype": "TCP", 00:21:50.231 "adrfam": "IPv4", 00:21:50.231 "traddr": "10.0.0.2", 00:21:50.231 "trsvcid": "4420" 00:21:50.231 }, 00:21:50.231 "peer_address": { 00:21:50.231 "trtype": "TCP", 00:21:50.231 "adrfam": "IPv4", 00:21:50.231 "traddr": "10.0.0.1", 
00:21:50.231 "trsvcid": "46184" 00:21:50.231 }, 00:21:50.231 "auth": { 00:21:50.231 "state": "completed", 00:21:50.232 "digest": "sha512", 00:21:50.232 "dhgroup": "ffdhe6144" 00:21:50.232 } 00:21:50.232 } 00:21:50.232 ]' 00:21:50.232 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.232 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.232 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.232 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.232 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.232 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.232 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.232 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.490 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:21:51.424 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.424 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.424 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.424 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.682 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.682 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.682 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.682 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.682 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.940 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.872 00:21:52.872 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.872 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.872 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.131 { 00:21:53.131 "cntlid": 137, 00:21:53.131 "qid": 0, 00:21:53.131 "state": "enabled", 00:21:53.131 "thread": "nvmf_tgt_poll_group_000", 00:21:53.131 "listen_address": { 00:21:53.131 "trtype": "TCP", 00:21:53.131 "adrfam": "IPv4", 00:21:53.131 "traddr": "10.0.0.2", 00:21:53.131 "trsvcid": "4420" 00:21:53.131 }, 00:21:53.131 "peer_address": { 00:21:53.131 "trtype": "TCP", 00:21:53.131 "adrfam": "IPv4", 00:21:53.131 "traddr": "10.0.0.1", 00:21:53.131 "trsvcid": "46212" 00:21:53.131 }, 00:21:53.131 "auth": { 00:21:53.131 "state": "completed", 00:21:53.131 "digest": "sha512", 00:21:53.131 "dhgroup": "ffdhe8192" 00:21:53.131 } 00:21:53.131 } 00:21:53.131 ]' 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.131 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.389 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:21:54.347 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.347 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.347 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.347 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.347 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.347 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.347 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.347 05:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.605 05:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.605 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:55.539 00:21:55.539 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.539 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.539 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.539 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.539 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.539 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.539 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.539 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.539 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.539 { 00:21:55.539 "cntlid": 139, 00:21:55.539 "qid": 0, 00:21:55.539 "state": "enabled", 00:21:55.539 "thread": "nvmf_tgt_poll_group_000", 00:21:55.539 "listen_address": { 00:21:55.539 "trtype": "TCP", 00:21:55.539 "adrfam": "IPv4", 00:21:55.539 "traddr": "10.0.0.2", 00:21:55.539 "trsvcid": "4420" 00:21:55.539 }, 00:21:55.539 "peer_address": { 00:21:55.539 "trtype": "TCP", 00:21:55.539 "adrfam": "IPv4", 00:21:55.539 "traddr": "10.0.0.1", 00:21:55.539 "trsvcid": "46228" 00:21:55.539 }, 00:21:55.539 "auth": { 00:21:55.539 "state": "completed", 00:21:55.539 "digest": "sha512", 00:21:55.539 "dhgroup": "ffdhe8192" 00:21:55.539 } 00:21:55.539 } 00:21:55.539 ]' 00:21:55.539 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.539 
05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.539 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.797 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.797 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.797 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.797 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.797 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.054 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjQ5ZTMwNTA4NmEzZDdkMjAwYjE2YTViMDM5ZWUyYTmcGKoy: --dhchap-ctrl-secret DHHC-1:02:ODU3ZjlkY2FiOTdkN2I4ZGM5MWUyZjUxYWMzZDUxMmIzNDMzYzljZDcyMjg4MjUyiQkeGw==: 00:21:57.034 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.034 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.034 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.034 05:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.034 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.034 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.034 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.034 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.292 05:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.292 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.223 00:21:58.223 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.223 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.223 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.481 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.481 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.481 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.481 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.481 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.481 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.481 { 
00:21:58.481 "cntlid": 141, 00:21:58.481 "qid": 0, 00:21:58.481 "state": "enabled", 00:21:58.481 "thread": "nvmf_tgt_poll_group_000", 00:21:58.481 "listen_address": { 00:21:58.481 "trtype": "TCP", 00:21:58.481 "adrfam": "IPv4", 00:21:58.481 "traddr": "10.0.0.2", 00:21:58.481 "trsvcid": "4420" 00:21:58.481 }, 00:21:58.481 "peer_address": { 00:21:58.481 "trtype": "TCP", 00:21:58.481 "adrfam": "IPv4", 00:21:58.481 "traddr": "10.0.0.1", 00:21:58.481 "trsvcid": "45766" 00:21:58.481 }, 00:21:58.481 "auth": { 00:21:58.481 "state": "completed", 00:21:58.481 "digest": "sha512", 00:21:58.481 "dhgroup": "ffdhe8192" 00:21:58.481 } 00:21:58.481 } 00:21:58.481 ]' 00:21:58.481 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.481 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.481 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.481 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.481 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.481 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.481 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.481 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.738 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDI2OTllN2NhMGU5NTJmYzcxNGNhMzM4Mjg5YmRjMGYzYmY3OGQwZWY4NTYxYTA2/9KsBw==: --dhchap-ctrl-secret DHHC-1:01:MzlmODM0Nzk4MWZiYTQ0OWJhZDkyZTg5NWM4MDZmM2XysXUN: 00:21:59.670 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.670 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.670 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.670 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.670 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.670 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.670 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.670 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.927 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:59.927 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.927 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.927 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:21:59.927 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:59.927 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.927 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:59.927 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.927 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.927 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.928 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.928 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.861 00:22:00.861 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.861 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.861 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.118 05:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.118 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.119 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.119 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.119 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.119 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.119 { 00:22:01.119 "cntlid": 143, 00:22:01.119 "qid": 0, 00:22:01.119 "state": "enabled", 00:22:01.119 "thread": "nvmf_tgt_poll_group_000", 00:22:01.119 "listen_address": { 00:22:01.119 "trtype": "TCP", 00:22:01.119 "adrfam": "IPv4", 00:22:01.119 "traddr": "10.0.0.2", 00:22:01.119 "trsvcid": "4420" 00:22:01.119 }, 00:22:01.119 "peer_address": { 00:22:01.119 "trtype": "TCP", 00:22:01.119 "adrfam": "IPv4", 00:22:01.119 "traddr": "10.0.0.1", 00:22:01.119 "trsvcid": "45802" 00:22:01.119 }, 00:22:01.119 "auth": { 00:22:01.119 "state": "completed", 00:22:01.119 "digest": "sha512", 00:22:01.119 "dhgroup": "ffdhe8192" 00:22:01.119 } 00:22:01.119 } 00:22:01.119 ]' 00:22:01.119 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.119 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.119 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.119 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:01.119 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.377 05:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.377 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.377 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.634 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:22:02.566 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.566 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.566 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.566 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.566 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.566 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:02.566 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:02.566 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:02.566 05:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.566 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.566 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.824 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.757 00:22:03.757 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.757 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.757 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.015 { 00:22:04.015 "cntlid": 145, 00:22:04.015 "qid": 0, 00:22:04.015 "state": "enabled", 
00:22:04.015 "thread": "nvmf_tgt_poll_group_000", 00:22:04.015 "listen_address": { 00:22:04.015 "trtype": "TCP", 00:22:04.015 "adrfam": "IPv4", 00:22:04.015 "traddr": "10.0.0.2", 00:22:04.015 "trsvcid": "4420" 00:22:04.015 }, 00:22:04.015 "peer_address": { 00:22:04.015 "trtype": "TCP", 00:22:04.015 "adrfam": "IPv4", 00:22:04.015 "traddr": "10.0.0.1", 00:22:04.015 "trsvcid": "45844" 00:22:04.015 }, 00:22:04.015 "auth": { 00:22:04.015 "state": "completed", 00:22:04.015 "digest": "sha512", 00:22:04.015 "dhgroup": "ffdhe8192" 00:22:04.015 } 00:22:04.015 } 00:22:04.015 ]' 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.015 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.273 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NjkzMmMzNzA3MjBhZjYzZmMyNjU0ZDgwZDUxNjcyMzAzMmMzMTAxZWJjNmZkMzA40y5FBg==: --dhchap-ctrl-secret DHHC-1:03:NjMxNDA5ZDdjZDc2NzVkYzc5YzYyYzcyOTg2Y2VjZjk5Y2U2MDBhNzViOTMwOWRlYWRhY2NhMGRlYzYxMDA4ZYVf6cg=: 00:22:05.207 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.208 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.208 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.208 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.208 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.208 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:05.208 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.208 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.466 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.466 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:05.466 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.466 
05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:05.466 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:05.466 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.466 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:05.466 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.466 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:05.466 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:06.402 request: 00:22:06.402 { 00:22:06.402 "name": "nvme0", 00:22:06.402 "trtype": "tcp", 00:22:06.402 "traddr": "10.0.0.2", 00:22:06.402 "adrfam": "ipv4", 00:22:06.402 "trsvcid": "4420", 00:22:06.402 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.402 "prchk_reftag": false, 00:22:06.402 "prchk_guard": false, 00:22:06.402 "hdgst": false, 00:22:06.402 "ddgst": false, 00:22:06.402 "dhchap_key": "key2", 
00:22:06.402 "method": "bdev_nvme_attach_controller", 00:22:06.402 "req_id": 1 00:22:06.402 } 00:22:06.402 Got JSON-RPC error response 00:22:06.402 response: 00:22:06.402 { 00:22:06.402 "code": -5, 00:22:06.402 "message": "Input/output error" 00:22:06.402 } 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.402 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.969 request: 00:22:06.969 { 00:22:06.969 "name": "nvme0", 00:22:06.969 
"trtype": "tcp", 00:22:06.969 "traddr": "10.0.0.2", 00:22:06.969 "adrfam": "ipv4", 00:22:06.969 "trsvcid": "4420", 00:22:06.969 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.969 "prchk_reftag": false, 00:22:06.969 "prchk_guard": false, 00:22:06.969 "hdgst": false, 00:22:06.969 "ddgst": false, 00:22:06.969 "dhchap_key": "key1", 00:22:06.969 "dhchap_ctrlr_key": "ckey2", 00:22:06.969 "method": "bdev_nvme_attach_controller", 00:22:06.969 "req_id": 1 00:22:06.969 } 00:22:06.969 Got JSON-RPC error response 00:22:06.969 response: 00:22:06.969 { 00:22:06.969 "code": -5, 00:22:06.969 "message": "Input/output error" 00:22:06.969 } 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:06.969 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.970 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:06.970 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.970 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.970 05:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.903 request: 00:22:07.903 { 00:22:07.903 "name": "nvme0", 00:22:07.903 "trtype": "tcp", 00:22:07.903 "traddr": "10.0.0.2", 00:22:07.903 "adrfam": "ipv4", 00:22:07.903 "trsvcid": "4420", 00:22:07.903 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:07.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.903 "prchk_reftag": false, 00:22:07.903 "prchk_guard": false, 00:22:07.903 "hdgst": false, 00:22:07.903 "ddgst": false, 00:22:07.903 "dhchap_key": "key1", 00:22:07.903 "dhchap_ctrlr_key": "ckey1", 00:22:07.903 "method": "bdev_nvme_attach_controller", 00:22:07.903 "req_id": 1 00:22:07.903 } 00:22:07.903 Got JSON-RPC error response 00:22:07.903 response: 00:22:07.903 { 00:22:07.903 "code": -5, 00:22:07.903 "message": "Input/output error" 00:22:07.903 } 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1632207 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1632207 ']' 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1632207 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1632207 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1632207' 00:22:07.903 killing process with pid 1632207 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1632207 00:22:07.903 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1632207 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1654611 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1654611 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1654611 ']' 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:08.161 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1654611 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1654611 ']' 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:08.429 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.690 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.690 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:08.690 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:08.690 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.690 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.947 
05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.878 00:22:09.878 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.878 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.878 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.135 05:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.135 { 00:22:10.135 "cntlid": 1, 00:22:10.135 "qid": 0, 00:22:10.135 "state": "enabled", 00:22:10.135 "thread": "nvmf_tgt_poll_group_000", 00:22:10.135 "listen_address": { 00:22:10.135 "trtype": "TCP", 00:22:10.135 "adrfam": "IPv4", 00:22:10.135 "traddr": "10.0.0.2", 00:22:10.135 "trsvcid": "4420" 00:22:10.135 }, 00:22:10.135 "peer_address": { 00:22:10.135 "trtype": "TCP", 00:22:10.135 "adrfam": "IPv4", 00:22:10.135 "traddr": "10.0.0.1", 00:22:10.135 "trsvcid": "43200" 00:22:10.135 }, 00:22:10.135 "auth": { 00:22:10.135 "state": "completed", 00:22:10.135 "digest": "sha512", 00:22:10.135 "dhgroup": "ffdhe8192" 00:22:10.135 } 00:22:10.135 } 00:22:10.135 ]' 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.135 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.392 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmI0M2EwNzY0ZDY4ZWQ5ZmU5NzNkY2QyN2I4NmFkMGNmOWFiYTFkYTU4NGRlYzY3Mjk1YzZjYjg3OGQyYjgwNa2TqiM=: 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:11.362 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:11.620 05:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.620 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:11.620 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.620 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:11.620 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:11.620 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:11.620 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:11.620 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.620 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.877 request: 00:22:11.877 { 00:22:11.877 "name": "nvme0", 00:22:11.877 "trtype": "tcp", 00:22:11.877 
"traddr": "10.0.0.2", 00:22:11.877 "adrfam": "ipv4", 00:22:11.877 "trsvcid": "4420", 00:22:11.877 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.877 "prchk_reftag": false, 00:22:11.877 "prchk_guard": false, 00:22:11.877 "hdgst": false, 00:22:11.877 "ddgst": false, 00:22:11.877 "dhchap_key": "key3", 00:22:11.877 "method": "bdev_nvme_attach_controller", 00:22:11.877 "req_id": 1 00:22:11.877 } 00:22:11.877 Got JSON-RPC error response 00:22:11.877 response: 00:22:11.877 { 00:22:11.877 "code": -5, 00:22:11.877 "message": "Input/output error" 00:22:11.877 } 00:22:11.877 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:11.877 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:11.877 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:11.877 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:11.877 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:11.877 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:11.877 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:11.878 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:12.136 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.136 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:12.136 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.136 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:12.136 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:12.136 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:12.136 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:12.136 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.136 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.393 request: 00:22:12.393 { 00:22:12.393 "name": "nvme0", 00:22:12.393 "trtype": "tcp", 00:22:12.393 "traddr": "10.0.0.2", 00:22:12.393 "adrfam": "ipv4", 00:22:12.393 "trsvcid": "4420", 00:22:12.393 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.393 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.393 "prchk_reftag": false, 00:22:12.393 "prchk_guard": false, 00:22:12.393 "hdgst": false, 00:22:12.393 "ddgst": false, 00:22:12.393 "dhchap_key": "key3", 00:22:12.393 "method": "bdev_nvme_attach_controller", 00:22:12.393 "req_id": 1 00:22:12.393 } 00:22:12.393 Got JSON-RPC error response 00:22:12.393 response: 00:22:12.393 { 00:22:12.393 "code": -5, 00:22:12.393 "message": "Input/output error" 00:22:12.393 } 00:22:12.393 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:12.393 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:12.393 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:12.393 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:12.393 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:12.393 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:12.393 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:12.393 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.393 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.394 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:22:12.651 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.651 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.651 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.908 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.908 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.909 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.909 request: 00:22:12.909 { 00:22:12.909 "name": "nvme0", 00:22:12.909 "trtype": "tcp", 00:22:12.909 "traddr": "10.0.0.2", 00:22:12.909 "adrfam": "ipv4", 00:22:12.909 "trsvcid": "4420", 00:22:12.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.909 "prchk_reftag": false, 00:22:12.909 "prchk_guard": false, 00:22:12.909 "hdgst": false, 00:22:12.909 "ddgst": false, 00:22:12.909 "dhchap_key": "key0", 00:22:12.909 "dhchap_ctrlr_key": "key1", 00:22:12.909 "method": "bdev_nvme_attach_controller", 00:22:12.909 "req_id": 1 00:22:12.909 } 00:22:12.909 Got JSON-RPC error response 00:22:12.909 response: 00:22:12.909 { 00:22:12.909 "code": -5, 00:22:12.909 "message": "Input/output error" 00:22:12.909 } 00:22:13.166 05:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:13.166 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:13.166 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:13.166 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:13.166 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:13.166 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:13.424 00:22:13.424 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:13.424 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:13.424 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.682 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.682 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.682 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1632234 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1632234 ']' 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1632234 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1632234 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1632234' 00:22:13.939 killing process with pid 1632234 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1632234 00:22:13.939 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1632234 00:22:14.196 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:14.196 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.196 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:14.196 05:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.196 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:14.196 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.196 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.196 rmmod nvme_tcp 00:22:14.196 rmmod nvme_fabrics 00:22:14.196 rmmod nvme_keyring 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1654611 ']' 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1654611 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1654611 ']' 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1654611 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1654611 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1654611' 00:22:14.454 killing process with pid 1654611 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1654611 00:22:14.454 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1654611 00:22:14.713 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.713 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.713 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.713 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.713 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.713 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.713 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.713 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.611 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:16.611 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.8xj /tmp/spdk.key-sha256.PST /tmp/spdk.key-sha384.Tsi /tmp/spdk.key-sha512.VbB /tmp/spdk.key-sha512.Vfu /tmp/spdk.key-sha384.pCq /tmp/spdk.key-sha256.t1D '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:16.611 00:22:16.611 real 3m8.823s 00:22:16.611 user 7m19.665s 00:22:16.611 sys 0m25.058s 00:22:16.611 05:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:16.611 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.611 ************************************ 00:22:16.611 END TEST nvmf_auth_target 00:22:16.611 ************************************ 00:22:16.611 05:42:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:16.611 05:42:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:16.611 05:42:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:16.611 05:42:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:16.611 05:42:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:16.611 ************************************ 00:22:16.611 START TEST nvmf_bdevio_no_huge 00:22:16.611 ************************************ 00:22:16.612 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:16.612 * Looking for test storage... 
00:22:16.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.870 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:16.871 
05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.871 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.771 05:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:18.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:18.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:18.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:18.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.771 05:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.771 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:18.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:22:18.772 00:22:18.772 --- 10.0.0.2 ping statistics --- 00:22:18.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.772 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:18.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:22:18.772 00:22:18.772 --- 10.0.0.1 ping statistics --- 00:22:18.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.772 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1657774 00:22:18.772 05:42:12 
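The `nvmf_tcp_init` sequence traced above (move one NIC port into a network namespace, assign 10.0.0.1/10.0.0.2, open TCP port 4420, then cross-ping) can be sketched as a standalone script. The interface names `cvl_0_0`/`cvl_0_1` and the namespace name are taken from this log; the initial `ip -4 addr flush` steps are omitted. The sketch only records and prints the commands (dry run) so it can be inspected without root or the physical ports:

```shell
# Dry-run sketch of SPDK's nvmf_tcp_init from nvmf/common.sh: collects the
# commands instead of executing them (the real setup needs root + two ports).
TARGET_IF=cvl_0_0          # port handed to the target namespace
INITIATOR_IF=cvl_0_1       # port left in the default namespace
NS=cvl_0_0_ns_spdk

CMDS=()
run() { CMDS+=("$*"); echo "+ $*"; }   # replace the echo with "$@" to execute

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two pings at the end mirror the `common.sh@267`/`@268` checks in the trace: the first verifies the default namespace can reach the target address, the second that the namespaced target can reach the initiator.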
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1657774 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1657774 ']' 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.772 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.772 [2024-07-25 05:42:12.406137] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:22:18.772 [2024-07-25 05:42:12.406238] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:19.031 [2024-07-25 05:42:12.474182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.031 [2024-07-25 05:42:12.557154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:19.031 [2024-07-25 05:42:12.557220] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.031 [2024-07-25 05:42:12.557234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.031 [2024-07-25 05:42:12.557277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.031 [2024-07-25 05:42:12.557288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.031 [2024-07-25 05:42:12.557375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:19.031 [2024-07-25 05:42:12.557438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:19.031 [2024-07-25 05:42:12.557504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:19.031 [2024-07-25 05:42:12.557506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.031 [2024-07-25 05:42:12.681434] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.031 Malloc0 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.031 [2024-07-25 05:42:12.719674] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:19.031 { 00:22:19.031 "params": { 00:22:19.031 "name": "Nvme$subsystem", 00:22:19.031 "trtype": "$TEST_TRANSPORT", 00:22:19.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.031 "adrfam": "ipv4", 00:22:19.031 "trsvcid": "$NVMF_PORT", 00:22:19.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.031 "hdgst": ${hdgst:-false}, 00:22:19.031 "ddgst": ${ddgst:-false} 00:22:19.031 }, 00:22:19.031 "method": "bdev_nvme_attach_controller" 00:22:19.031 } 00:22:19.031 EOF 00:22:19.031 )") 00:22:19.031 05:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:19.031 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:19.031 "params": { 00:22:19.031 "name": "Nvme1", 00:22:19.031 "trtype": "tcp", 00:22:19.031 "traddr": "10.0.0.2", 00:22:19.031 "adrfam": "ipv4", 00:22:19.031 "trsvcid": "4420", 00:22:19.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.031 "hdgst": false, 00:22:19.031 "ddgst": false 00:22:19.031 }, 00:22:19.031 "method": "bdev_nvme_attach_controller" 00:22:19.031 }' 00:22:19.289 [2024-07-25 05:42:12.761798] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:22:19.289 [2024-07-25 05:42:12.761891] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1657921 ] 00:22:19.289 [2024-07-25 05:42:12.822906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:19.289 [2024-07-25 05:42:12.909900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.289 [2024-07-25 05:42:12.909951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.289 [2024-07-25 05:42:12.909954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.547 I/O targets: 00:22:19.547 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:19.547 00:22:19.547 00:22:19.547 CUnit - A unit testing framework for C - Version 2.1-3 00:22:19.547 http://cunit.sourceforge.net/ 00:22:19.547 00:22:19.547 00:22:19.547 Suite: bdevio tests on: Nvme1n1 00:22:19.547 Test: blockdev write read block 
...passed 00:22:19.807 Test: blockdev write zeroes read block ...passed 00:22:19.807 Test: blockdev write zeroes read no split ...passed 00:22:19.807 Test: blockdev write zeroes read split ...passed 00:22:19.807 Test: blockdev write zeroes read split partial ...passed 00:22:19.807 Test: blockdev reset ...[2024-07-25 05:42:13.399702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.807 [2024-07-25 05:42:13.399827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9e4e0 (9): Bad file descriptor 00:22:19.808 [2024-07-25 05:42:13.414145] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:19.808 passed 00:22:19.808 Test: blockdev write read 8 blocks ...passed 00:22:19.808 Test: blockdev write read size > 128k ...passed 00:22:19.808 Test: blockdev write read invalid size ...passed 00:22:19.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:19.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:19.808 Test: blockdev write read max offset ...passed 00:22:20.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:20.066 Test: blockdev writev readv 8 blocks ...passed 00:22:20.066 Test: blockdev writev readv 30 x 1block ...passed 00:22:20.066 Test: blockdev writev readv block ...passed 00:22:20.066 Test: blockdev writev readv size > 128k ...passed 00:22:20.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:20.066 Test: blockdev comparev and writev ...[2024-07-25 05:42:13.674710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.066 [2024-07-25 05:42:13.674746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:20.066 [2024-07-25 05:42:13.674770] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.066 [2024-07-25 05:42:13.674787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:20.066 [2024-07-25 05:42:13.675174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.066 [2024-07-25 05:42:13.675201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:20.066 [2024-07-25 05:42:13.675222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.066 [2024-07-25 05:42:13.675237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:20.066 [2024-07-25 05:42:13.675616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.066 [2024-07-25 05:42:13.675640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:20.066 [2024-07-25 05:42:13.675662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.066 [2024-07-25 05:42:13.675677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:20.066 [2024-07-25 05:42:13.676010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.066 [2024-07-25 05:42:13.676035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:22:20.066 [2024-07-25 05:42:13.676066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:20.066 [2024-07-25 05:42:13.676081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:20.066 passed 00:22:20.066 Test: blockdev nvme passthru rw ...passed 00:22:20.066 Test: blockdev nvme passthru vendor specific ...[2024-07-25 05:42:13.758591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.066 [2024-07-25 05:42:13.758619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:20.066 [2024-07-25 05:42:13.758808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.066 [2024-07-25 05:42:13.758830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:20.066 [2024-07-25 05:42:13.759004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.066 [2024-07-25 05:42:13.759027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:20.066 [2024-07-25 05:42:13.759216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.066 [2024-07-25 05:42:13.759249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:20.066 passed 00:22:20.324 Test: blockdev nvme admin passthru ...passed 00:22:20.324 Test: blockdev copy ...passed 00:22:20.324 00:22:20.324 Run Summary: Type Total Ran Passed Failed Inactive 
00:22:20.324 suites 1 1 n/a 0 0 00:22:20.324 tests 23 23 23 0 0 00:22:20.324 asserts 152 152 152 0 n/a 00:22:20.324 00:22:20.324 Elapsed time = 1.258 seconds 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.583 rmmod nvme_tcp 00:22:20.583 rmmod nvme_fabrics 00:22:20.583 rmmod nvme_keyring 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:20.583 
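The JSON that the harness fed to bdevio over `/dev/fd/62` earlier in this trace (`gen_nvmf_target_json`, around `nvmf/common.sh@554`) is a templated `bdev_nvme_attach_controller` entry per subsystem id. A minimal sketch of that generator, with the field values taken from the `printf` output shown in the log (the real helper substitutes `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` from the environment):

```shell
# Sketch of gen_nvmf_target_json: emits one bdev_nvme_attach_controller
# config entry for the given subsystem id (default 1), matching the shape
# printed by nvmf/common.sh@558 in the trace above.
gen_nvmf_target_json() {
    local s=${1:-1}
    printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }\n' "$s" "$s" "$s"
}

gen_nvmf_target_json 1
```

With subsystem id 1 this yields the `Nvme1`/`cnode1`/`host1` entry that bdevio then used to attach the controller over TCP.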
05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1657774 ']' 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1657774 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1657774 ']' 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1657774 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1657774 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1657774' 00:22:20.583 killing process with pid 1657774 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1657774 00:22:20.583 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1657774 00:22:21.150 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:21.150 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:21.150 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:21.150 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.150 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:21.150 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.150 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.150 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:23.052 00:22:23.052 real 0m6.372s 00:22:23.052 user 0m10.833s 00:22:23.052 sys 0m2.433s 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.052 ************************************ 00:22:23.052 END TEST nvmf_bdevio_no_huge 00:22:23.052 ************************************ 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:23.052 ************************************ 00:22:23.052 START TEST nvmf_tls 00:22:23.052 ************************************ 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:23.052 * Looking for test storage... 
00:22:23.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.052 
05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.052 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.311 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.212 05:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.212 05:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:25.212 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:25.212 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.212 05:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:25.212 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:25.212 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:25.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:25.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:22:25.212 00:22:25.212 --- 10.0.0.2 ping statistics --- 00:22:25.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.212 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:25.212 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:25.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:22:25.212 00:22:25.212 --- 10.0.0.1 ping statistics --- 00:22:25.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.213 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls 
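The `nvmf_tcp_init` trace above builds the two-namespace test topology: the target port is moved into its own network namespace so initiator and target traffic cross the physical link. A minimal sketch of the same setup, using the interface names (`cvl_0_0`, `cvl_0_1`), addresses, and commands from the log; it requires root and the actual NIC ports, so it is a command fragment rather than a runnable script.

```shell
# Namespace topology from nvmf_tcp_init (commands as traced in the log above).
# cvl_0_0 = target port (moved into a netns), cvl_0_1 = initiator port.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
```

The two pings at the end mirror the connectivity check in the log; once both succeed, the target application is launched under `ip netns exec cvl_0_0_ns_spdk`.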
-- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1659987 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1659987 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1659987 ']' 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:25.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.213 [2024-07-25 05:42:18.780375] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:22:25.213 [2024-07-25 05:42:18.780465] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.213 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.213 [2024-07-25 05:42:18.852297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.504 [2024-07-25 05:42:18.941878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.504 [2024-07-25 05:42:18.941937] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.504 [2024-07-25 05:42:18.941963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.504 [2024-07-25 05:42:18.941977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.504 [2024-07-25 05:42:18.941989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:25.504 [2024-07-25 05:42:18.942029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.504 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.504 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:25.504 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.504 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.504 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.504 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.504 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:25.504 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:25.766 true 00:22:25.766 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:25.766 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:26.024 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:26.024 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:26.024 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:26.282 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.282 05:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:26.540 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:26.540 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:26.540 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:26.540 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.540 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:26.798 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:26.798 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:26.798 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.798 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:27.056 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:27.056 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:27.056 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:27.315 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.315 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:27.573 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:27.573 
05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:27.573 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:27.831 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.831 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:28.089 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:28.089 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:28.089 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:28.089 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:28.089 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.089 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:28.089 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:28.089 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:28.089 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.IAjn8Mibgg 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.f3ZpYZR2Ni 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.IAjn8Mibgg 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.f3ZpYZR2Ni 00:22:28.348 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:28.606 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
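The two `format_interchange_psk` calls above wrap a raw hex key into the NVMe TLS PSK interchange format: the `NVMeTLSkey-1` prefix, a two-digit hash indicator, and a base64 blob. Decoding the blob from the log shows it is the ASCII hex key followed by four extra bytes; the sketch below assumes those bytes are a little-endian CRC32 (zlib/IEEE polynomial) of the ASCII key. The function name mirrors the shell helper; the CRC detail is an assumption, not taken from the log.

```python
import base64
import struct
import zlib


def format_interchange_psk(hex_key: str, hash_id: int = 1) -> str:
    """Sketch of the format_interchange_psk helper traced above.

    Assumption: the trailing four bytes of the base64 payload are a
    little-endian CRC32 of the ASCII key (zlib/IEEE polynomial).
    """
    key = hex_key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
    blob = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{blob}:"


psk = format_interchange_psk("00112233445566778899aabbccddeeff")
```

The resulting string is written to a `mktemp` file and `chmod 0600`'d before being handed to the target, as the log does with `/tmp/tmp.IAjn8Mibgg`.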
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:28.864 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.IAjn8Mibgg 00:22:28.864 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IAjn8Mibgg 00:22:28.864 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:29.122 [2024-07-25 05:42:22.683264] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.122 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:29.380 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:29.638 [2024-07-25 05:42:23.180817] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:29.638 [2024-07-25 05:42:23.181093] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.638 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:29.895 malloc0 00:22:29.895 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.152 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IAjn8Mibgg 00:22:30.410 
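The `setup_nvmf_tgt` steps traced above reduce to a short `rpc.py` sequence against a target started with `--wait-for-rpc`. A sketch with the commands as they appear in the log; `$rpc` and the PSK file path are placeholders for this environment's paths, and it needs the live target and namespace from the log, so it is a fragment rather than a standalone script.

```shell
rpc=scripts/rpc.py     # placeholder; the log uses the full SPDK checkout path
KEY=/tmp/psk.txt       # file holding the NVMeTLSkey-1:01:...: string, mode 0600

$rpc sock_set_default_impl -i ssl                 # select the ssl socket impl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init                          # leave --wait-for-rpc state
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"
```

The `-k` flag on the listener enables TLS (the log notes it is considered experimental), and `--psk` on `add_host` binds the interchange-format key to that host NQN.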
[2024-07-25 05:42:23.947110] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:30.410 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.IAjn8Mibgg 00:22:30.410 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.378 Initializing NVMe Controllers 00:22:40.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:40.378 Initialization complete. Launching workers. 00:22:40.378 ======================================================== 00:22:40.378 Latency(us) 00:22:40.378 Device Information : IOPS MiB/s Average min max 00:22:40.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7432.36 29.03 8613.78 1295.48 9666.34 00:22:40.378 ======================================================== 00:22:40.378 Total : 7432.36 29.03 8613.78 1295.48 9666.34 00:22:40.378 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IAjn8Mibgg 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IAjn8Mibgg' 00:22:40.636 05:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1661766 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1661766 /var/tmp/bdevperf.sock 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1661766 ']' 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.636 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.636 [2024-07-25 05:42:34.124157] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:22:40.637 [2024-07-25 05:42:34.124253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661766 ] 00:22:40.637 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.637 [2024-07-25 05:42:34.181571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.637 [2024-07-25 05:42:34.264958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.894 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.894 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:40.894 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IAjn8Mibgg 00:22:41.152 [2024-07-25 05:42:34.648020] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.152 [2024-07-25 05:42:34.648134] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:41.152 TLSTESTn1 00:22:41.152 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:41.152 Running I/O for 10 seconds... 
00:22:53.347 00:22:53.347 Latency(us) 00:22:53.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.347 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:53.347 Verification LBA range: start 0x0 length 0x2000 00:22:53.347 TLSTESTn1 : 10.04 3207.43 12.53 0.00 0.00 39811.84 9417.77 73011.96 00:22:53.347 =================================================================================================================== 00:22:53.347 Total : 3207.43 12.53 0.00 0.00 39811.84 9417.77 73011.96 00:22:53.347 0 00:22:53.347 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:53.347 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1661766 00:22:53.347 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1661766 ']' 00:22:53.347 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1661766 00:22:53.347 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:53.347 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.347 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1661766 00:22:53.347 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:53.347 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:53.347 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1661766' 00:22:53.347 killing process with pid 1661766 00:22:53.348 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1661766 00:22:53.348 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.348 
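As a sanity check on the result tables above, the MiB/s column follows directly from the IOPS column at the fixed 4096-byte I/O size: MiB/s = IOPS x 4096 / 2^20. A quick check against both runs in the log:

```python
def iops_to_mibs(iops: float, io_size: int = 4096) -> float:
    """Convert an IOPS figure at a fixed I/O size to MiB/s."""
    return iops * io_size / (1 << 20)


# Figures taken from the two result tables in the log above.
perf_mibs = iops_to_mibs(7432.36)      # spdk_nvme_perf run (29.03 MiB/s column)
bdevperf_mibs = iops_to_mibs(3207.43)  # bdevperf TLSTESTn1 run (12.53 MiB/s column)
```

At 4 KiB the conversion is simply IOPS / 256, which is why the bdevperf TLS run's ~3.2k IOPS lands near 12.5 MiB/s.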
00:22:53.348 Latency(us) 00:22:53.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.348 =================================================================================================================== 00:22:53.348 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:53.348 [2024-07-25 05:42:44.952631] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:53.348 05:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1661766 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f3ZpYZR2Ni 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f3ZpYZR2Ni 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f3ZpYZR2Ni 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.348 05:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.f3ZpYZR2Ni' 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1663081 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1663081 /var/tmp/bdevperf.sock 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1663081 ']' 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.348 [2024-07-25 05:42:45.217962] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:22:53.348 [2024-07-25 05:42:45.218060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663081 ] 00:22:53.348 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.348 [2024-07-25 05:42:45.275142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.348 [2024-07-25 05:42:45.356333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.f3ZpYZR2Ni 00:22:53.348 [2024-07-25 05:42:45.690824] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.348 [2024-07-25 05:42:45.690947] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.348 [2024-07-25 05:42:45.700421] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:53.348 [2024-07-25 05:42:45.700800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad0ab0 (107): Transport endpoint is not connected 00:22:53.348 [2024-07-25 05:42:45.701777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad0ab0 
(9): Bad file descriptor 00:22:53.348 [2024-07-25 05:42:45.702781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.348 [2024-07-25 05:42:45.702802] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:53.348 [2024-07-25 05:42:45.702828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:53.348 request: 00:22:53.348 { 00:22:53.348 "name": "TLSTEST", 00:22:53.348 "trtype": "tcp", 00:22:53.348 "traddr": "10.0.0.2", 00:22:53.348 "adrfam": "ipv4", 00:22:53.348 "trsvcid": "4420", 00:22:53.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.348 "prchk_reftag": false, 00:22:53.348 "prchk_guard": false, 00:22:53.348 "hdgst": false, 00:22:53.348 "ddgst": false, 00:22:53.348 "psk": "/tmp/tmp.f3ZpYZR2Ni", 00:22:53.348 "method": "bdev_nvme_attach_controller", 00:22:53.348 "req_id": 1 00:22:53.348 } 00:22:53.348 Got JSON-RPC error response 00:22:53.348 response: 00:22:53.348 { 00:22:53.348 "code": -5, 00:22:53.348 "message": "Input/output error" 00:22:53.348 } 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1663081 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1663081 ']' 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1663081 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1663081 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:53.348 05:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1663081' 00:22:53.348 killing process with pid 1663081 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1663081 00:22:53.348 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.348 00:22:53.348 Latency(us) 00:22:53.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.348 =================================================================================================================== 00:22:53.348 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.348 [2024-07-25 05:42:45.746055] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1663081 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IAjn8Mibgg 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IAjn8Mibgg 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IAjn8Mibgg 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IAjn8Mibgg' 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1663209 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.348 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1663209 /var/tmp/bdevperf.sock 00:22:53.349 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1663209 ']' 00:22:53.349 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.349 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:53.349 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.349 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:53.349 05:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.349 [2024-07-25 05:42:45.982031] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:22:53.349 [2024-07-25 05:42:45.982123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663209 ] 00:22:53.349 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.349 [2024-07-25 05:42:46.041454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.349 [2024-07-25 05:42:46.128146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.IAjn8Mibgg 00:22:53.349 [2024-07-25 05:42:46.447745] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.349 [2024-07-25 05:42:46.447877] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.349 [2024-07-25 05:42:46.453093] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:53.349 [2024-07-25 05:42:46.453135] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:53.349 [2024-07-25 05:42:46.453204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:53.349 [2024-07-25 05:42:46.453686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d2ab0 (107): Transport endpoint is not connected 00:22:53.349 [2024-07-25 05:42:46.454673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d2ab0 (9): Bad file descriptor 00:22:53.349 [2024-07-25 05:42:46.455672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.349 [2024-07-25 05:42:46.455694] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:53.349 [2024-07-25 05:42:46.455718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:53.349 request: 00:22:53.349 { 00:22:53.349 "name": "TLSTEST", 00:22:53.349 "trtype": "tcp", 00:22:53.349 "traddr": "10.0.0.2", 00:22:53.349 "adrfam": "ipv4", 00:22:53.349 "trsvcid": "4420", 00:22:53.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.349 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:53.349 "prchk_reftag": false, 00:22:53.349 "prchk_guard": false, 00:22:53.349 "hdgst": false, 00:22:53.349 "ddgst": false, 00:22:53.349 "psk": "/tmp/tmp.IAjn8Mibgg", 00:22:53.349 "method": "bdev_nvme_attach_controller", 00:22:53.349 "req_id": 1 00:22:53.349 } 00:22:53.349 Got JSON-RPC error response 00:22:53.349 response: 00:22:53.349 { 00:22:53.349 "code": -5, 00:22:53.349 "message": "Input/output error" 00:22:53.349 } 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1663209 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1663209 ']' 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1663209 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1663209 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1663209' 00:22:53.349 killing process with pid 1663209 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1663209 00:22:53.349 Received shutdown signal, test time was 
about 10.000000 seconds 00:22:53.349 00:22:53.349 Latency(us) 00:22:53.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.349 =================================================================================================================== 00:22:53.349 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.349 [2024-07-25 05:42:46.508440] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1663209 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IAjn8Mibgg 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IAjn8Mibgg 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t 
run_bdevperf 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IAjn8Mibgg 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IAjn8Mibgg' 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1663231 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1663231 /var/tmp/bdevperf.sock 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1663231 ']' 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:53.349 05:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.349 [2024-07-25 05:42:46.775521] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:22:53.349 [2024-07-25 05:42:46.775614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663231 ] 00:22:53.349 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.349 [2024-07-25 05:42:46.833593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.349 [2024-07-25 05:42:46.914801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:53.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IAjn8Mibgg 00:22:53.607 [2024-07-25 05:42:47.253256] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.607 [2024-07-25 05:42:47.253375] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.607 [2024-07-25 05:42:47.264372] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:53.607 [2024-07-25 05:42:47.264405] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:53.607 [2024-07-25 05:42:47.264448] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:53.607 [2024-07-25 05:42:47.265246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2006ab0 (107): Transport endpoint is not connected 00:22:53.607 [2024-07-25 05:42:47.266214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2006ab0 (9): Bad file descriptor 00:22:53.607 [2024-07-25 05:42:47.267213] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:53.607 [2024-07-25 05:42:47.267253] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:53.607 [2024-07-25 05:42:47.267272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:53.607 request: 00:22:53.607 { 00:22:53.607 "name": "TLSTEST", 00:22:53.607 "trtype": "tcp", 00:22:53.607 "traddr": "10.0.0.2", 00:22:53.607 "adrfam": "ipv4", 00:22:53.607 "trsvcid": "4420", 00:22:53.607 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:53.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.607 "prchk_reftag": false, 00:22:53.607 "prchk_guard": false, 00:22:53.607 "hdgst": false, 00:22:53.607 "ddgst": false, 00:22:53.607 "psk": "/tmp/tmp.IAjn8Mibgg", 00:22:53.607 "method": "bdev_nvme_attach_controller", 00:22:53.607 "req_id": 1 00:22:53.607 } 00:22:53.607 Got JSON-RPC error response 00:22:53.607 response: 00:22:53.607 { 00:22:53.607 "code": -5, 00:22:53.607 "message": "Input/output error" 00:22:53.607 } 00:22:53.607 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1663231 00:22:53.607 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1663231 ']' 00:22:53.607 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1663231 00:22:53.607 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:53.607 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.607 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1663231 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1663231' 00:22:53.865 killing process with pid 1663231 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1663231 00:22:53.865 Received shutdown signal, test time was 
about 10.000000 seconds 00:22:53.865 00:22:53.865 Latency(us) 00:22:53.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.865 =================================================================================================================== 00:22:53.865 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.865 [2024-07-25 05:42:47.310576] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1663231 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:53.865 05:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1663372 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1663372 /var/tmp/bdevperf.sock 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1663372 ']' 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:53.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:53.865 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.865 [2024-07-25 05:42:47.544715] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:22:53.865 [2024-07-25 05:42:47.544802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663372 ] 00:22:54.147 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.147 [2024-07-25 05:42:47.606014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.147 [2024-07-25 05:42:47.691453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.147 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.147 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:54.147 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:54.406 [2024-07-25 05:42:48.016470] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:54.406 [2024-07-25 05:42:48.018398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b31e60 (9): Bad file descriptor 00:22:54.406 [2024-07-25 05:42:48.019393] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:54.406 [2024-07-25 05:42:48.019416] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:54.406 [2024-07-25 05:42:48.019433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.406 request: 00:22:54.406 { 00:22:54.406 "name": "TLSTEST", 00:22:54.406 "trtype": "tcp", 00:22:54.406 "traddr": "10.0.0.2", 00:22:54.406 "adrfam": "ipv4", 00:22:54.406 "trsvcid": "4420", 00:22:54.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.406 "prchk_reftag": false, 00:22:54.406 "prchk_guard": false, 00:22:54.406 "hdgst": false, 00:22:54.406 "ddgst": false, 00:22:54.406 "method": "bdev_nvme_attach_controller", 00:22:54.406 "req_id": 1 00:22:54.406 } 00:22:54.406 Got JSON-RPC error response 00:22:54.406 response: 00:22:54.406 { 00:22:54.406 "code": -5, 00:22:54.406 "message": "Input/output error" 00:22:54.406 } 00:22:54.406 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1663372 00:22:54.406 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1663372 ']' 00:22:54.406 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1663372 00:22:54.406 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:54.406 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.406 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1663372 00:22:54.406 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:54.406 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:54.406 05:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1663372' 00:22:54.406 killing process with pid 1663372 00:22:54.406 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1663372 00:22:54.406 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.406 00:22:54.406 Latency(us) 00:22:54.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.406 =================================================================================================================== 00:22:54.406 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.406 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1663372 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1659987 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1659987 ']' 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1659987 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1659987 00:22:54.664 
05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1659987' 00:22:54.664 killing process with pid 1659987 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1659987 00:22:54.664 [2024-07-25 05:42:48.308420] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:54.664 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1659987 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.Ksjwacg6W6 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Ksjwacg6W6 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1663520 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1663520 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1663520 ']' 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
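The `format_interchange_psk` / `format_key` steps traced above build the NVMe TLS PSK "interchange format" string that the test then writes to a temp file. A minimal sketch of that construction, assuming (as SPDK's `nvmf/common.sh` helper does) that the configured key string is taken as raw ASCII bytes, a little-endian CRC32 is appended, and the result is base64-encoded under the `NVMeTLSkey-1:<hash>:` prefix — the function name and hardcoded prefix here are illustrative, not SPDK's exact shell helper:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    """Sketch of the NVMe TLS PSK interchange format: prefix, 2-digit hash id,
    base64(key bytes + little-endian CRC32 of the key bytes), trailing colon."""
    raw = key.encode("ascii")
    # CRC32 of the key material, appended little-endian per the interchange format
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

# The key and hash id used by the test run in the log above:
print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))
```

Run against the log's inputs, this should reproduce the `key_long` value recorded above (`NVMeTLSkey-1:02:MDAx…wWXNJw==:`); the subsequent `mktemp`/`chmod 0600` lines persist that string to the PSK path handed to `nvmf_subsystem_add_host --psk`.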
00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.923 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.182 [2024-07-25 05:42:48.662320] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:22:55.182 [2024-07-25 05:42:48.662416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.182 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.182 [2024-07-25 05:42:48.729196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.182 [2024-07-25 05:42:48.816535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.182 [2024-07-25 05:42:48.816604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.182 [2024-07-25 05:42:48.816621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.182 [2024-07-25 05:42:48.816635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.182 [2024-07-25 05:42:48.816648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:55.182 [2024-07-25 05:42:48.816683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.440 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:55.440 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:55.440 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.440 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:55.440 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.440 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.440 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Ksjwacg6W6 00:22:55.440 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ksjwacg6W6 00:22:55.440 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.698 [2024-07-25 05:42:49.181823] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.698 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.957 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:56.214 [2024-07-25 05:42:49.691256] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:56.214 [2024-07-25 05:42:49.691532] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:56.214 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:56.472 malloc0 00:22:56.472 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:56.730 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ksjwacg6W6 00:22:56.988 [2024-07-25 05:42:50.444668] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ksjwacg6W6 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ksjwacg6W6' 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1663799 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.988 05:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1663799 /var/tmp/bdevperf.sock 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1663799 ']' 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.988 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.988 [2024-07-25 05:42:50.509272] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:22:56.988 [2024-07-25 05:42:50.509364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663799 ] 00:22:56.988 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.988 [2024-07-25 05:42:50.567089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.988 [2024-07-25 05:42:50.651161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.247 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.247 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:57.247 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ksjwacg6W6 00:22:57.505 [2024-07-25 05:42:50.988530] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.505 [2024-07-25 05:42:50.988670] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:57.505 TLSTESTn1 00:22:57.505 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:57.505 Running I/O for 10 seconds... 
00:23:09.702 00:23:09.702 Latency(us) 00:23:09.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.702 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:09.702 Verification LBA range: start 0x0 length 0x2000 00:23:09.702 TLSTESTn1 : 10.05 2339.70 9.14 0.00 0.00 54552.71 6893.42 85439.53 00:23:09.702 =================================================================================================================== 00:23:09.702 Total : 2339.70 9.14 0.00 0.00 54552.71 6893.42 85439.53 00:23:09.702 0 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1663799 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1663799 ']' 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1663799 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1663799 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1663799' 00:23:09.702 killing process with pid 1663799 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1663799 00:23:09.702 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.702 
00:23:09.702 Latency(us) 00:23:09.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.702 =================================================================================================================== 00:23:09.702 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.702 [2024-07-25 05:43:01.302450] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1663799 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Ksjwacg6W6 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ksjwacg6W6 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ksjwacg6W6 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ksjwacg6W6 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.702 05:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ksjwacg6W6' 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1664998 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1664998 /var/tmp/bdevperf.sock 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1664998 ']' 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.702 [2024-07-25 05:43:01.579629] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:09.702 [2024-07-25 05:43:01.579720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664998 ] 00:23:09.702 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.702 [2024-07-25 05:43:01.636941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.702 [2024-07-25 05:43:01.717441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:09.702 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ksjwacg6W6 00:23:09.702 [2024-07-25 05:43:02.047705] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.702 [2024-07-25 05:43:02.047784] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:09.702 [2024-07-25 05:43:02.047799] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Ksjwacg6W6 00:23:09.702 request: 00:23:09.702 { 00:23:09.702 "name": "TLSTEST", 00:23:09.702 "trtype": "tcp", 00:23:09.702 "traddr": "10.0.0.2", 00:23:09.702 
"adrfam": "ipv4", 00:23:09.702 "trsvcid": "4420", 00:23:09.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.702 "prchk_reftag": false, 00:23:09.702 "prchk_guard": false, 00:23:09.702 "hdgst": false, 00:23:09.702 "ddgst": false, 00:23:09.702 "psk": "/tmp/tmp.Ksjwacg6W6", 00:23:09.702 "method": "bdev_nvme_attach_controller", 00:23:09.702 "req_id": 1 00:23:09.702 } 00:23:09.702 Got JSON-RPC error response 00:23:09.702 response: 00:23:09.702 { 00:23:09.702 "code": -1, 00:23:09.702 "message": "Operation not permitted" 00:23:09.702 } 00:23:09.702 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1664998 00:23:09.702 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1664998 ']' 00:23:09.702 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1664998 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1664998 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1664998' 00:23:09.703 killing process with pid 1664998 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1664998 00:23:09.703 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.703 00:23:09.703 Latency(us) 00:23:09.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:09.703 =================================================================================================================== 00:23:09.703 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1664998 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1663520 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1663520 ']' 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1663520 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1663520 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1663520' 00:23:09.703 killing process with pid 1663520 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1663520 00:23:09.703 [2024-07-25 05:43:02.312403] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1663520 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1665145 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1665145 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1665145 ']' 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.703 [2024-07-25 05:43:02.589108] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:09.703 [2024-07-25 05:43:02.589209] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.703 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.703 [2024-07-25 05:43:02.652878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.703 [2024-07-25 05:43:02.739556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.703 [2024-07-25 05:43:02.739621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.703 [2024-07-25 05:43:02.739649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.703 [2024-07-25 05:43:02.739661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.703 [2024-07-25 05:43:02.739671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.703 [2024-07-25 05:43:02.739699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Ksjwacg6W6 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Ksjwacg6W6 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Ksjwacg6W6 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ksjwacg6W6 00:23:09.703 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:09.703 [2024-07-25 05:43:03.157033] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.703 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:09.960 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:09.960 [2024-07-25 05:43:03.658372] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.960 [2024-07-25 05:43:03.658633] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.217 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:10.474 malloc0 00:23:10.474 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:10.733 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ksjwacg6W6 00:23:10.991 [2024-07-25 05:43:04.503601] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:10.991 [2024-07-25 05:43:04.503640] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:10.991 [2024-07-25 05:43:04.503685] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:10.991 request: 00:23:10.991 { 
00:23:10.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.991 "host": "nqn.2016-06.io.spdk:host1", 00:23:10.992 "psk": "/tmp/tmp.Ksjwacg6W6", 00:23:10.992 "method": "nvmf_subsystem_add_host", 00:23:10.992 "req_id": 1 00:23:10.992 } 00:23:10.992 Got JSON-RPC error response 00:23:10.992 response: 00:23:10.992 { 00:23:10.992 "code": -32603, 00:23:10.992 "message": "Internal error" 00:23:10.992 } 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1665145 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1665145 ']' 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1665145 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1665145 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1665145' 00:23:10.992 killing process with pid 1665145 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1665145 00:23:10.992 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1665145 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Ksjwacg6W6 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1665436 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1665436 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1665436 ']' 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:11.250 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.250 [2024-07-25 05:43:04.858386] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:11.251 [2024-07-25 05:43:04.858464] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.251 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.251 [2024-07-25 05:43:04.925236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.509 [2024-07-25 05:43:05.012555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.509 [2024-07-25 05:43:05.012618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.509 [2024-07-25 05:43:05.012635] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.509 [2024-07-25 05:43:05.012648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.509 [2024-07-25 05:43:05.012660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.509 [2024-07-25 05:43:05.012703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.509 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.509 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:11.509 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.509 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:11.509 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.509 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.509 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Ksjwacg6W6 00:23:11.509 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ksjwacg6W6 00:23:11.509 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:11.768 [2024-07-25 05:43:05.405424] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.768 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.026 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:12.284 [2024-07-25 05:43:05.894779] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.284 [2024-07-25 05:43:05.895034] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:12.284 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:12.542 malloc0 00:23:12.542 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:12.801 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ksjwacg6W6 00:23:13.059 [2024-07-25 05:43:06.640926] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:13.059 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1665715 00:23:13.059 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.059 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.059 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1665715 /var/tmp/bdevperf.sock 00:23:13.059 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1665715 ']' 00:23:13.059 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.059 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:13.059 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:13.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.059 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:13.059 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.059 [2024-07-25 05:43:06.701917] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:13.059 [2024-07-25 05:43:06.702013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665715 ] 00:23:13.059 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.318 [2024-07-25 05:43:06.767741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.318 [2024-07-25 05:43:06.857227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.318 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:13.318 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:13.318 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ksjwacg6W6 00:23:13.577 [2024-07-25 05:43:07.191025] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.577 [2024-07-25 05:43:07.191145] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:13.835 TLSTESTn1 00:23:13.835 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:14.093 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:14.093 "subsystems": [ 00:23:14.093 { 00:23:14.093 "subsystem": "keyring", 00:23:14.093 "config": [] 00:23:14.093 }, 00:23:14.093 { 00:23:14.093 "subsystem": "iobuf", 00:23:14.093 "config": [ 00:23:14.093 { 00:23:14.093 "method": "iobuf_set_options", 00:23:14.093 "params": { 00:23:14.093 "small_pool_count": 8192, 00:23:14.093 "large_pool_count": 1024, 00:23:14.093 "small_bufsize": 8192, 00:23:14.093 "large_bufsize": 135168 00:23:14.093 } 00:23:14.093 } 00:23:14.093 ] 00:23:14.093 }, 00:23:14.093 { 00:23:14.093 "subsystem": "sock", 00:23:14.093 "config": [ 00:23:14.093 { 00:23:14.093 "method": "sock_set_default_impl", 00:23:14.093 "params": { 00:23:14.093 "impl_name": "posix" 00:23:14.093 } 00:23:14.093 }, 00:23:14.093 { 00:23:14.093 "method": "sock_impl_set_options", 00:23:14.093 "params": { 00:23:14.093 "impl_name": "ssl", 00:23:14.093 "recv_buf_size": 4096, 00:23:14.093 "send_buf_size": 4096, 00:23:14.093 "enable_recv_pipe": true, 00:23:14.093 "enable_quickack": false, 00:23:14.093 "enable_placement_id": 0, 00:23:14.093 "enable_zerocopy_send_server": true, 00:23:14.093 "enable_zerocopy_send_client": false, 00:23:14.093 "zerocopy_threshold": 0, 00:23:14.093 "tls_version": 0, 00:23:14.093 "enable_ktls": false 00:23:14.093 } 00:23:14.093 }, 00:23:14.093 { 00:23:14.093 "method": "sock_impl_set_options", 00:23:14.093 "params": { 00:23:14.093 "impl_name": "posix", 00:23:14.094 "recv_buf_size": 2097152, 00:23:14.094 "send_buf_size": 2097152, 00:23:14.094 "enable_recv_pipe": true, 00:23:14.094 "enable_quickack": false, 00:23:14.094 "enable_placement_id": 0, 00:23:14.094 "enable_zerocopy_send_server": true, 00:23:14.094 "enable_zerocopy_send_client": false, 00:23:14.094 "zerocopy_threshold": 0, 00:23:14.094 "tls_version": 0, 00:23:14.094 "enable_ktls": false 00:23:14.094 } 
00:23:14.094 } 00:23:14.094 ] 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "subsystem": "vmd", 00:23:14.094 "config": [] 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "subsystem": "accel", 00:23:14.094 "config": [ 00:23:14.094 { 00:23:14.094 "method": "accel_set_options", 00:23:14.094 "params": { 00:23:14.094 "small_cache_size": 128, 00:23:14.094 "large_cache_size": 16, 00:23:14.094 "task_count": 2048, 00:23:14.094 "sequence_count": 2048, 00:23:14.094 "buf_count": 2048 00:23:14.094 } 00:23:14.094 } 00:23:14.094 ] 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "subsystem": "bdev", 00:23:14.094 "config": [ 00:23:14.094 { 00:23:14.094 "method": "bdev_set_options", 00:23:14.094 "params": { 00:23:14.094 "bdev_io_pool_size": 65535, 00:23:14.094 "bdev_io_cache_size": 256, 00:23:14.094 "bdev_auto_examine": true, 00:23:14.094 "iobuf_small_cache_size": 128, 00:23:14.094 "iobuf_large_cache_size": 16 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "bdev_raid_set_options", 00:23:14.094 "params": { 00:23:14.094 "process_window_size_kb": 1024, 00:23:14.094 "process_max_bandwidth_mb_sec": 0 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "bdev_iscsi_set_options", 00:23:14.094 "params": { 00:23:14.094 "timeout_sec": 30 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "bdev_nvme_set_options", 00:23:14.094 "params": { 00:23:14.094 "action_on_timeout": "none", 00:23:14.094 "timeout_us": 0, 00:23:14.094 "timeout_admin_us": 0, 00:23:14.094 "keep_alive_timeout_ms": 10000, 00:23:14.094 "arbitration_burst": 0, 00:23:14.094 "low_priority_weight": 0, 00:23:14.094 "medium_priority_weight": 0, 00:23:14.094 "high_priority_weight": 0, 00:23:14.094 "nvme_adminq_poll_period_us": 10000, 00:23:14.094 "nvme_ioq_poll_period_us": 0, 00:23:14.094 "io_queue_requests": 0, 00:23:14.094 "delay_cmd_submit": true, 00:23:14.094 "transport_retry_count": 4, 00:23:14.094 "bdev_retry_count": 3, 00:23:14.094 "transport_ack_timeout": 0, 00:23:14.094 
"ctrlr_loss_timeout_sec": 0, 00:23:14.094 "reconnect_delay_sec": 0, 00:23:14.094 "fast_io_fail_timeout_sec": 0, 00:23:14.094 "disable_auto_failback": false, 00:23:14.094 "generate_uuids": false, 00:23:14.094 "transport_tos": 0, 00:23:14.094 "nvme_error_stat": false, 00:23:14.094 "rdma_srq_size": 0, 00:23:14.094 "io_path_stat": false, 00:23:14.094 "allow_accel_sequence": false, 00:23:14.094 "rdma_max_cq_size": 0, 00:23:14.094 "rdma_cm_event_timeout_ms": 0, 00:23:14.094 "dhchap_digests": [ 00:23:14.094 "sha256", 00:23:14.094 "sha384", 00:23:14.094 "sha512" 00:23:14.094 ], 00:23:14.094 "dhchap_dhgroups": [ 00:23:14.094 "null", 00:23:14.094 "ffdhe2048", 00:23:14.094 "ffdhe3072", 00:23:14.094 "ffdhe4096", 00:23:14.094 "ffdhe6144", 00:23:14.094 "ffdhe8192" 00:23:14.094 ] 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "bdev_nvme_set_hotplug", 00:23:14.094 "params": { 00:23:14.094 "period_us": 100000, 00:23:14.094 "enable": false 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "bdev_malloc_create", 00:23:14.094 "params": { 00:23:14.094 "name": "malloc0", 00:23:14.094 "num_blocks": 8192, 00:23:14.094 "block_size": 4096, 00:23:14.094 "physical_block_size": 4096, 00:23:14.094 "uuid": "3eb3a544-ef2f-4618-aabd-8cc778895bbc", 00:23:14.094 "optimal_io_boundary": 0, 00:23:14.094 "md_size": 0, 00:23:14.094 "dif_type": 0, 00:23:14.094 "dif_is_head_of_md": false, 00:23:14.094 "dif_pi_format": 0 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "bdev_wait_for_examine" 00:23:14.094 } 00:23:14.094 ] 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "subsystem": "nbd", 00:23:14.094 "config": [] 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "subsystem": "scheduler", 00:23:14.094 "config": [ 00:23:14.094 { 00:23:14.094 "method": "framework_set_scheduler", 00:23:14.094 "params": { 00:23:14.094 "name": "static" 00:23:14.094 } 00:23:14.094 } 00:23:14.094 ] 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "subsystem": "nvmf", 00:23:14.094 
"config": [ 00:23:14.094 { 00:23:14.094 "method": "nvmf_set_config", 00:23:14.094 "params": { 00:23:14.094 "discovery_filter": "match_any", 00:23:14.094 "admin_cmd_passthru": { 00:23:14.094 "identify_ctrlr": false 00:23:14.094 } 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "nvmf_set_max_subsystems", 00:23:14.094 "params": { 00:23:14.094 "max_subsystems": 1024 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "nvmf_set_crdt", 00:23:14.094 "params": { 00:23:14.094 "crdt1": 0, 00:23:14.094 "crdt2": 0, 00:23:14.094 "crdt3": 0 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "nvmf_create_transport", 00:23:14.094 "params": { 00:23:14.094 "trtype": "TCP", 00:23:14.094 "max_queue_depth": 128, 00:23:14.094 "max_io_qpairs_per_ctrlr": 127, 00:23:14.094 "in_capsule_data_size": 4096, 00:23:14.094 "max_io_size": 131072, 00:23:14.094 "io_unit_size": 131072, 00:23:14.094 "max_aq_depth": 128, 00:23:14.094 "num_shared_buffers": 511, 00:23:14.094 "buf_cache_size": 4294967295, 00:23:14.094 "dif_insert_or_strip": false, 00:23:14.094 "zcopy": false, 00:23:14.094 "c2h_success": false, 00:23:14.094 "sock_priority": 0, 00:23:14.094 "abort_timeout_sec": 1, 00:23:14.094 "ack_timeout": 0, 00:23:14.094 "data_wr_pool_size": 0 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "nvmf_create_subsystem", 00:23:14.094 "params": { 00:23:14.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.094 "allow_any_host": false, 00:23:14.094 "serial_number": "SPDK00000000000001", 00:23:14.094 "model_number": "SPDK bdev Controller", 00:23:14.094 "max_namespaces": 10, 00:23:14.094 "min_cntlid": 1, 00:23:14.094 "max_cntlid": 65519, 00:23:14.094 "ana_reporting": false 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "nvmf_subsystem_add_host", 00:23:14.094 "params": { 00:23:14.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.094 "host": "nqn.2016-06.io.spdk:host1", 00:23:14.094 "psk": "/tmp/tmp.Ksjwacg6W6" 
00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "nvmf_subsystem_add_ns", 00:23:14.094 "params": { 00:23:14.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.094 "namespace": { 00:23:14.094 "nsid": 1, 00:23:14.094 "bdev_name": "malloc0", 00:23:14.094 "nguid": "3EB3A544EF2F4618AABD8CC778895BBC", 00:23:14.094 "uuid": "3eb3a544-ef2f-4618-aabd-8cc778895bbc", 00:23:14.094 "no_auto_visible": false 00:23:14.094 } 00:23:14.094 } 00:23:14.094 }, 00:23:14.094 { 00:23:14.094 "method": "nvmf_subsystem_add_listener", 00:23:14.094 "params": { 00:23:14.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.094 "listen_address": { 00:23:14.094 "trtype": "TCP", 00:23:14.094 "adrfam": "IPv4", 00:23:14.094 "traddr": "10.0.0.2", 00:23:14.094 "trsvcid": "4420" 00:23:14.095 }, 00:23:14.095 "secure_channel": true 00:23:14.095 } 00:23:14.095 } 00:23:14.095 ] 00:23:14.095 } 00:23:14.095 ] 00:23:14.095 }' 00:23:14.095 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:14.353 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:14.353 "subsystems": [ 00:23:14.353 { 00:23:14.353 "subsystem": "keyring", 00:23:14.353 "config": [] 00:23:14.353 }, 00:23:14.353 { 00:23:14.353 "subsystem": "iobuf", 00:23:14.353 "config": [ 00:23:14.353 { 00:23:14.353 "method": "iobuf_set_options", 00:23:14.353 "params": { 00:23:14.353 "small_pool_count": 8192, 00:23:14.353 "large_pool_count": 1024, 00:23:14.353 "small_bufsize": 8192, 00:23:14.353 "large_bufsize": 135168 00:23:14.353 } 00:23:14.353 } 00:23:14.353 ] 00:23:14.353 }, 00:23:14.353 { 00:23:14.353 "subsystem": "sock", 00:23:14.353 "config": [ 00:23:14.353 { 00:23:14.353 "method": "sock_set_default_impl", 00:23:14.353 "params": { 00:23:14.353 "impl_name": "posix" 00:23:14.353 } 00:23:14.353 }, 00:23:14.353 { 00:23:14.353 "method": "sock_impl_set_options", 00:23:14.353 
"params": { 00:23:14.353 "impl_name": "ssl", 00:23:14.353 "recv_buf_size": 4096, 00:23:14.353 "send_buf_size": 4096, 00:23:14.353 "enable_recv_pipe": true, 00:23:14.353 "enable_quickack": false, 00:23:14.353 "enable_placement_id": 0, 00:23:14.353 "enable_zerocopy_send_server": true, 00:23:14.353 "enable_zerocopy_send_client": false, 00:23:14.353 "zerocopy_threshold": 0, 00:23:14.353 "tls_version": 0, 00:23:14.353 "enable_ktls": false 00:23:14.353 } 00:23:14.353 }, 00:23:14.353 { 00:23:14.353 "method": "sock_impl_set_options", 00:23:14.353 "params": { 00:23:14.353 "impl_name": "posix", 00:23:14.353 "recv_buf_size": 2097152, 00:23:14.353 "send_buf_size": 2097152, 00:23:14.353 "enable_recv_pipe": true, 00:23:14.353 "enable_quickack": false, 00:23:14.353 "enable_placement_id": 0, 00:23:14.353 "enable_zerocopy_send_server": true, 00:23:14.353 "enable_zerocopy_send_client": false, 00:23:14.353 "zerocopy_threshold": 0, 00:23:14.353 "tls_version": 0, 00:23:14.353 "enable_ktls": false 00:23:14.353 } 00:23:14.353 } 00:23:14.353 ] 00:23:14.353 }, 00:23:14.353 { 00:23:14.353 "subsystem": "vmd", 00:23:14.353 "config": [] 00:23:14.353 }, 00:23:14.353 { 00:23:14.353 "subsystem": "accel", 00:23:14.353 "config": [ 00:23:14.353 { 00:23:14.353 "method": "accel_set_options", 00:23:14.353 "params": { 00:23:14.353 "small_cache_size": 128, 00:23:14.353 "large_cache_size": 16, 00:23:14.353 "task_count": 2048, 00:23:14.353 "sequence_count": 2048, 00:23:14.353 "buf_count": 2048 00:23:14.353 } 00:23:14.353 } 00:23:14.353 ] 00:23:14.353 }, 00:23:14.353 { 00:23:14.353 "subsystem": "bdev", 00:23:14.353 "config": [ 00:23:14.353 { 00:23:14.353 "method": "bdev_set_options", 00:23:14.353 "params": { 00:23:14.353 "bdev_io_pool_size": 65535, 00:23:14.353 "bdev_io_cache_size": 256, 00:23:14.353 "bdev_auto_examine": true, 00:23:14.353 "iobuf_small_cache_size": 128, 00:23:14.353 "iobuf_large_cache_size": 16 00:23:14.353 } 00:23:14.353 }, 00:23:14.353 { 00:23:14.353 "method": "bdev_raid_set_options", 
00:23:14.353 "params": { 00:23:14.353 "process_window_size_kb": 1024, 00:23:14.353 "process_max_bandwidth_mb_sec": 0 00:23:14.353 } 00:23:14.354 }, 00:23:14.354 { 00:23:14.354 "method": "bdev_iscsi_set_options", 00:23:14.354 "params": { 00:23:14.354 "timeout_sec": 30 00:23:14.354 } 00:23:14.354 }, 00:23:14.354 { 00:23:14.354 "method": "bdev_nvme_set_options", 00:23:14.354 "params": { 00:23:14.354 "action_on_timeout": "none", 00:23:14.354 "timeout_us": 0, 00:23:14.354 "timeout_admin_us": 0, 00:23:14.354 "keep_alive_timeout_ms": 10000, 00:23:14.354 "arbitration_burst": 0, 00:23:14.354 "low_priority_weight": 0, 00:23:14.354 "medium_priority_weight": 0, 00:23:14.354 "high_priority_weight": 0, 00:23:14.354 "nvme_adminq_poll_period_us": 10000, 00:23:14.354 "nvme_ioq_poll_period_us": 0, 00:23:14.354 "io_queue_requests": 512, 00:23:14.354 "delay_cmd_submit": true, 00:23:14.354 "transport_retry_count": 4, 00:23:14.354 "bdev_retry_count": 3, 00:23:14.354 "transport_ack_timeout": 0, 00:23:14.354 "ctrlr_loss_timeout_sec": 0, 00:23:14.354 "reconnect_delay_sec": 0, 00:23:14.354 "fast_io_fail_timeout_sec": 0, 00:23:14.354 "disable_auto_failback": false, 00:23:14.354 "generate_uuids": false, 00:23:14.354 "transport_tos": 0, 00:23:14.354 "nvme_error_stat": false, 00:23:14.354 "rdma_srq_size": 0, 00:23:14.354 "io_path_stat": false, 00:23:14.354 "allow_accel_sequence": false, 00:23:14.354 "rdma_max_cq_size": 0, 00:23:14.354 "rdma_cm_event_timeout_ms": 0, 00:23:14.354 "dhchap_digests": [ 00:23:14.354 "sha256", 00:23:14.354 "sha384", 00:23:14.354 "sha512" 00:23:14.354 ], 00:23:14.354 "dhchap_dhgroups": [ 00:23:14.354 "null", 00:23:14.354 "ffdhe2048", 00:23:14.354 "ffdhe3072", 00:23:14.354 "ffdhe4096", 00:23:14.354 "ffdhe6144", 00:23:14.354 "ffdhe8192" 00:23:14.354 ] 00:23:14.354 } 00:23:14.354 }, 00:23:14.354 { 00:23:14.354 "method": "bdev_nvme_attach_controller", 00:23:14.354 "params": { 00:23:14.354 "name": "TLSTEST", 00:23:14.354 "trtype": "TCP", 00:23:14.354 "adrfam": "IPv4", 
00:23:14.354 "traddr": "10.0.0.2", 00:23:14.354 "trsvcid": "4420", 00:23:14.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.354 "prchk_reftag": false, 00:23:14.354 "prchk_guard": false, 00:23:14.354 "ctrlr_loss_timeout_sec": 0, 00:23:14.354 "reconnect_delay_sec": 0, 00:23:14.354 "fast_io_fail_timeout_sec": 0, 00:23:14.354 "psk": "/tmp/tmp.Ksjwacg6W6", 00:23:14.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.354 "hdgst": false, 00:23:14.354 "ddgst": false 00:23:14.354 } 00:23:14.354 }, 00:23:14.354 { 00:23:14.354 "method": "bdev_nvme_set_hotplug", 00:23:14.354 "params": { 00:23:14.354 "period_us": 100000, 00:23:14.354 "enable": false 00:23:14.354 } 00:23:14.354 }, 00:23:14.354 { 00:23:14.354 "method": "bdev_wait_for_examine" 00:23:14.354 } 00:23:14.354 ] 00:23:14.354 }, 00:23:14.354 { 00:23:14.354 "subsystem": "nbd", 00:23:14.354 "config": [] 00:23:14.354 } 00:23:14.354 ] 00:23:14.354 }' 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1665715 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1665715 ']' 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1665715 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1665715 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1665715' 00:23:14.354 killing process with 
pid 1665715 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1665715 00:23:14.354 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.354 00:23:14.354 Latency(us) 00:23:14.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.354 =================================================================================================================== 00:23:14.354 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.354 [2024-07-25 05:43:07.948829] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:14.354 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1665715 00:23:14.612 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1665436 00:23:14.612 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1665436 ']' 00:23:14.612 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1665436 00:23:14.613 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:14.613 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.613 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1665436 00:23:14.613 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:14.613 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:14.613 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1665436' 00:23:14.613 killing process with pid 1665436 00:23:14.613 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1665436 00:23:14.613 [2024-07-25 05:43:08.194348] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:14.613 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1665436 00:23:14.871 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:14.871 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.871 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:14.871 "subsystems": [ 00:23:14.871 { 00:23:14.871 "subsystem": "keyring", 00:23:14.871 "config": [] 00:23:14.871 }, 00:23:14.871 { 00:23:14.871 "subsystem": "iobuf", 00:23:14.871 "config": [ 00:23:14.871 { 00:23:14.871 "method": "iobuf_set_options", 00:23:14.871 "params": { 00:23:14.871 "small_pool_count": 8192, 00:23:14.871 "large_pool_count": 1024, 00:23:14.871 "small_bufsize": 8192, 00:23:14.871 "large_bufsize": 135168 00:23:14.871 } 00:23:14.871 } 00:23:14.871 ] 00:23:14.871 }, 00:23:14.871 { 00:23:14.871 "subsystem": "sock", 00:23:14.872 "config": [ 00:23:14.872 { 00:23:14.872 "method": "sock_set_default_impl", 00:23:14.872 "params": { 00:23:14.872 "impl_name": "posix" 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "sock_impl_set_options", 00:23:14.872 "params": { 00:23:14.872 "impl_name": "ssl", 00:23:14.872 "recv_buf_size": 4096, 00:23:14.872 "send_buf_size": 4096, 00:23:14.872 "enable_recv_pipe": true, 00:23:14.872 "enable_quickack": false, 00:23:14.872 "enable_placement_id": 0, 00:23:14.872 "enable_zerocopy_send_server": true, 00:23:14.872 "enable_zerocopy_send_client": false, 00:23:14.872 "zerocopy_threshold": 0, 00:23:14.872 "tls_version": 0, 00:23:14.872 "enable_ktls": false 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "sock_impl_set_options", 00:23:14.872 
"params": { 00:23:14.872 "impl_name": "posix", 00:23:14.872 "recv_buf_size": 2097152, 00:23:14.872 "send_buf_size": 2097152, 00:23:14.872 "enable_recv_pipe": true, 00:23:14.872 "enable_quickack": false, 00:23:14.872 "enable_placement_id": 0, 00:23:14.872 "enable_zerocopy_send_server": true, 00:23:14.872 "enable_zerocopy_send_client": false, 00:23:14.872 "zerocopy_threshold": 0, 00:23:14.872 "tls_version": 0, 00:23:14.872 "enable_ktls": false 00:23:14.872 } 00:23:14.872 } 00:23:14.872 ] 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "subsystem": "vmd", 00:23:14.872 "config": [] 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "subsystem": "accel", 00:23:14.872 "config": [ 00:23:14.872 { 00:23:14.872 "method": "accel_set_options", 00:23:14.872 "params": { 00:23:14.872 "small_cache_size": 128, 00:23:14.872 "large_cache_size": 16, 00:23:14.872 "task_count": 2048, 00:23:14.872 "sequence_count": 2048, 00:23:14.872 "buf_count": 2048 00:23:14.872 } 00:23:14.872 } 00:23:14.872 ] 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "subsystem": "bdev", 00:23:14.872 "config": [ 00:23:14.872 { 00:23:14.872 "method": "bdev_set_options", 00:23:14.872 "params": { 00:23:14.872 "bdev_io_pool_size": 65535, 00:23:14.872 "bdev_io_cache_size": 256, 00:23:14.872 "bdev_auto_examine": true, 00:23:14.872 "iobuf_small_cache_size": 128, 00:23:14.872 "iobuf_large_cache_size": 16 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "bdev_raid_set_options", 00:23:14.872 "params": { 00:23:14.872 "process_window_size_kb": 1024, 00:23:14.872 "process_max_bandwidth_mb_sec": 0 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "bdev_iscsi_set_options", 00:23:14.872 "params": { 00:23:14.872 "timeout_sec": 30 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "bdev_nvme_set_options", 00:23:14.872 "params": { 00:23:14.872 "action_on_timeout": "none", 00:23:14.872 "timeout_us": 0, 00:23:14.872 "timeout_admin_us": 0, 00:23:14.872 "keep_alive_timeout_ms": 10000, 
00:23:14.872 "arbitration_burst": 0, 00:23:14.872 "low_priority_weight": 0, 00:23:14.872 "medium_priority_weight": 0, 00:23:14.872 "high_priority_weight": 0, 00:23:14.872 "nvme_adminq_poll_period_us": 10000, 00:23:14.872 "nvme_ioq_poll_period_us": 0, 00:23:14.872 "io_queue_requests": 0, 00:23:14.872 "delay_cmd_submit": true, 00:23:14.872 "transport_retry_count": 4, 00:23:14.872 "bdev_retry_count": 3, 00:23:14.872 "transport_ack_timeout": 0, 00:23:14.872 "ctrlr_loss_timeout_sec": 0, 00:23:14.872 "reconnect_delay_sec": 0, 00:23:14.872 "fast_io_fail_timeout_sec": 0, 00:23:14.872 "disable_auto_failback": false, 00:23:14.872 "generate_uuids": false, 00:23:14.872 "transport_tos": 0, 00:23:14.872 "nvme_error_stat": false, 00:23:14.872 "rdma_srq_size": 0, 00:23:14.872 "io_path_stat": false, 00:23:14.872 "allow_accel_sequence": false, 00:23:14.872 "rdma_max_cq_size": 0, 00:23:14.872 "rdma_cm_event_timeout_ms": 0, 00:23:14.872 "dhchap_digests": [ 00:23:14.872 "sha256", 00:23:14.872 "sha384", 00:23:14.872 "sha512" 00:23:14.872 ], 00:23:14.872 "dhchap_dhgroups": [ 00:23:14.872 "null", 00:23:14.872 "ffdhe2048", 00:23:14.872 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:14.872 "ffdhe3072", 00:23:14.872 "ffdhe4096", 00:23:14.872 "ffdhe6144", 00:23:14.872 "ffdhe8192" 00:23:14.872 ] 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "bdev_nvme_set_hotplug", 00:23:14.872 "params": { 00:23:14.872 "period_us": 100000, 00:23:14.872 "enable": false 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "bdev_malloc_create", 00:23:14.872 "params": { 00:23:14.872 "name": "malloc0", 00:23:14.872 "num_blocks": 8192, 00:23:14.872 "block_size": 4096, 00:23:14.872 "physical_block_size": 4096, 00:23:14.872 "uuid": "3eb3a544-ef2f-4618-aabd-8cc778895bbc", 00:23:14.872 "optimal_io_boundary": 0, 00:23:14.872 "md_size": 0, 00:23:14.872 "dif_type": 0, 00:23:14.872 "dif_is_head_of_md": false, 00:23:14.872 
"dif_pi_format": 0 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "bdev_wait_for_examine" 00:23:14.872 } 00:23:14.872 ] 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "subsystem": "nbd", 00:23:14.872 "config": [] 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "subsystem": "scheduler", 00:23:14.872 "config": [ 00:23:14.872 { 00:23:14.872 "method": "framework_set_scheduler", 00:23:14.872 "params": { 00:23:14.872 "name": "static" 00:23:14.872 } 00:23:14.872 } 00:23:14.872 ] 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "subsystem": "nvmf", 00:23:14.872 "config": [ 00:23:14.872 { 00:23:14.872 "method": "nvmf_set_config", 00:23:14.872 "params": { 00:23:14.872 "discovery_filter": "match_any", 00:23:14.872 "admin_cmd_passthru": { 00:23:14.872 "identify_ctrlr": false 00:23:14.872 } 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "nvmf_set_max_subsystems", 00:23:14.872 "params": { 00:23:14.872 "max_subsystems": 1024 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "nvmf_set_crdt", 00:23:14.872 "params": { 00:23:14.872 "crdt1": 0, 00:23:14.872 "crdt2": 0, 00:23:14.872 "crdt3": 0 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "nvmf_create_transport", 00:23:14.872 "params": { 00:23:14.872 "trtype": "TCP", 00:23:14.872 "max_queue_depth": 128, 00:23:14.872 "max_io_qpairs_per_ctrlr": 127, 00:23:14.872 "in_capsule_data_size": 4096, 00:23:14.872 "max_io_size": 131072, 00:23:14.872 "io_unit_size": 131072, 00:23:14.872 "max_aq_depth": 128, 00:23:14.872 "num_shared_buffers": 511, 00:23:14.872 "buf_cache_size": 4294967295, 00:23:14.872 "dif_insert_or_strip": false, 00:23:14.872 "zcopy": false, 00:23:14.872 "c2h_success": false, 00:23:14.872 "sock_priority": 0, 00:23:14.872 "abort_timeout_sec": 1, 00:23:14.872 "ack_timeout": 0, 00:23:14.872 "data_wr_pool_size": 0 00:23:14.872 } 00:23:14.872 }, 00:23:14.872 { 00:23:14.872 "method": "nvmf_create_subsystem", 00:23:14.872 "params": { 00:23:14.872 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:14.872 "allow_any_host": false, 00:23:14.872 "serial_number": "SPDK00000000000001", 00:23:14.872 "model_number": "SPDK bdev Controller", 00:23:14.872 "max_namespaces": 10, 00:23:14.872 "min_cntlid": 1, 00:23:14.872 "max_cntlid": 65519, 00:23:14.872 "ana_reporting": false 00:23:14.872 } 00:23:14.872 }, 00:23:14.873 { 00:23:14.873 "method": "nvmf_subsystem_add_host", 00:23:14.873 "params": { 00:23:14.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.873 "host": "nqn.2016-06.io.spdk:host1", 00:23:14.873 "psk": "/tmp/tmp.Ksjwacg6W6" 00:23:14.873 } 00:23:14.873 }, 00:23:14.873 { 00:23:14.873 "method": "nvmf_subsystem_add_ns", 00:23:14.873 "params": { 00:23:14.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.873 "namespace": { 00:23:14.873 "nsid": 1, 00:23:14.873 "bdev_name": "malloc0", 00:23:14.873 "nguid": "3EB3A544EF2F4618AABD8CC778895BBC", 00:23:14.873 "uuid": "3eb3a544-ef2f-4618-aabd-8cc778895bbc", 00:23:14.873 "no_auto_visible": false 00:23:14.873 } 00:23:14.873 } 00:23:14.873 }, 00:23:14.873 { 00:23:14.873 "method": "nvmf_subsystem_add_listener", 00:23:14.873 "params": { 00:23:14.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.873 "listen_address": { 00:23:14.873 "trtype": "TCP", 00:23:14.873 "adrfam": "IPv4", 00:23:14.873 "traddr": "10.0.0.2", 00:23:14.873 "trsvcid": "4420" 00:23:14.873 }, 00:23:14.873 "secure_channel": true 00:23:14.873 } 00:23:14.873 } 00:23:14.873 ] 00:23:14.873 } 00:23:14.873 ] 00:23:14.873 }' 00:23:14.873 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.873 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1665878 00:23:14.873 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:14.873 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 1665878 00:23:14.873 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1665878 ']' 00:23:14.873 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.873 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.873 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.873 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.873 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.873 [2024-07-25 05:43:08.495021] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:14.873 [2024-07-25 05:43:08.495106] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.873 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.873 [2024-07-25 05:43:08.562838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.132 [2024-07-25 05:43:08.650976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.132 [2024-07-25 05:43:08.651042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.132 [2024-07-25 05:43:08.651059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.132 [2024-07-25 05:43:08.651072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:15.132 [2024-07-25 05:43:08.651083] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.132 [2024-07-25 05:43:08.651191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.424 [2024-07-25 05:43:08.885849] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.424 [2024-07-25 05:43:08.910687] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:15.424 [2024-07-25 05:43:08.926733] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.424 [2024-07-25 05:43:08.926995] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1666033 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1666033 /var/tmp/bdevperf.sock 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1666033 ']' 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.991 05:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.991 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:15.991 "subsystems": [ 00:23:15.991 { 00:23:15.991 "subsystem": "keyring", 00:23:15.991 "config": [] 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "subsystem": "iobuf", 00:23:15.991 "config": [ 00:23:15.991 { 00:23:15.991 "method": "iobuf_set_options", 00:23:15.991 "params": { 00:23:15.991 "small_pool_count": 8192, 00:23:15.991 "large_pool_count": 1024, 00:23:15.991 "small_bufsize": 8192, 00:23:15.991 "large_bufsize": 135168 00:23:15.991 } 00:23:15.991 } 00:23:15.991 ] 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "subsystem": "sock", 00:23:15.991 "config": [ 00:23:15.991 { 00:23:15.991 "method": "sock_set_default_impl", 00:23:15.991 "params": { 00:23:15.991 "impl_name": "posix" 00:23:15.991 } 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "method": "sock_impl_set_options", 00:23:15.991 "params": { 00:23:15.991 "impl_name": "ssl", 00:23:15.991 "recv_buf_size": 4096, 00:23:15.991 "send_buf_size": 4096, 00:23:15.991 "enable_recv_pipe": true, 00:23:15.991 "enable_quickack": false, 00:23:15.991 "enable_placement_id": 0, 00:23:15.991 "enable_zerocopy_send_server": true, 00:23:15.991 "enable_zerocopy_send_client": false, 00:23:15.991 "zerocopy_threshold": 0, 00:23:15.991 "tls_version": 0, 00:23:15.991 "enable_ktls": false 00:23:15.991 } 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "method": "sock_impl_set_options", 00:23:15.991 "params": { 00:23:15.991 "impl_name": "posix", 00:23:15.991 "recv_buf_size": 2097152, 00:23:15.991 "send_buf_size": 2097152, 00:23:15.991 "enable_recv_pipe": true, 00:23:15.991 "enable_quickack": false, 00:23:15.991 
"enable_placement_id": 0, 00:23:15.991 "enable_zerocopy_send_server": true, 00:23:15.991 "enable_zerocopy_send_client": false, 00:23:15.991 "zerocopy_threshold": 0, 00:23:15.991 "tls_version": 0, 00:23:15.991 "enable_ktls": false 00:23:15.991 } 00:23:15.991 } 00:23:15.991 ] 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "subsystem": "vmd", 00:23:15.991 "config": [] 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "subsystem": "accel", 00:23:15.991 "config": [ 00:23:15.991 { 00:23:15.991 "method": "accel_set_options", 00:23:15.991 "params": { 00:23:15.991 "small_cache_size": 128, 00:23:15.991 "large_cache_size": 16, 00:23:15.991 "task_count": 2048, 00:23:15.991 "sequence_count": 2048, 00:23:15.991 "buf_count": 2048 00:23:15.991 } 00:23:15.991 } 00:23:15.991 ] 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "subsystem": "bdev", 00:23:15.991 "config": [ 00:23:15.991 { 00:23:15.991 "method": "bdev_set_options", 00:23:15.991 "params": { 00:23:15.991 "bdev_io_pool_size": 65535, 00:23:15.991 "bdev_io_cache_size": 256, 00:23:15.991 "bdev_auto_examine": true, 00:23:15.991 "iobuf_small_cache_size": 128, 00:23:15.991 "iobuf_large_cache_size": 16 00:23:15.991 } 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "method": "bdev_raid_set_options", 00:23:15.991 "params": { 00:23:15.991 "process_window_size_kb": 1024, 00:23:15.991 "process_max_bandwidth_mb_sec": 0 00:23:15.991 } 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "method": "bdev_iscsi_set_options", 00:23:15.991 "params": { 00:23:15.991 "timeout_sec": 30 00:23:15.991 } 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "method": "bdev_nvme_set_options", 00:23:15.991 "params": { 00:23:15.991 "action_on_timeout": "none", 00:23:15.991 "timeout_us": 0, 00:23:15.991 "timeout_admin_us": 0, 00:23:15.991 "keep_alive_timeout_ms": 10000, 00:23:15.991 "arbitration_burst": 0, 00:23:15.991 "low_priority_weight": 0, 00:23:15.991 "medium_priority_weight": 0, 00:23:15.991 "high_priority_weight": 0, 00:23:15.991 "nvme_adminq_poll_period_us": 10000, 00:23:15.991 
"nvme_ioq_poll_period_us": 0, 00:23:15.991 "io_queue_requests": 512, 00:23:15.991 "delay_cmd_submit": true, 00:23:15.991 "transport_retry_count": 4, 00:23:15.991 "bdev_retry_count": 3, 00:23:15.991 "transport_ack_timeout": 0, 00:23:15.991 "ctrlr_loss_timeout_sec": 0, 00:23:15.991 "reconnect_delay_sec": 0, 00:23:15.991 "fast_io_fail_timeout_sec": 0, 00:23:15.991 "disable_auto_failback": false, 00:23:15.991 "generate_uuids": false, 00:23:15.991 "transport_tos": 0, 00:23:15.991 "nvme_error_stat": false, 00:23:15.991 "rdma_srq_size": 0, 00:23:15.991 "io_path_stat": false, 00:23:15.991 "allow_accel_sequence": false, 00:23:15.991 "rdma_max_cq_size": 0, 00:23:15.991 "rdma_cm_event_timeout_ms": 0, 00:23:15.991 "dhchap_digests": [ 00:23:15.991 "sha256", 00:23:15.991 "sha384", 00:23:15.991 "sha512" 00:23:15.991 ], 00:23:15.991 "dhchap_dhgroups": [ 00:23:15.991 "null", 00:23:15.991 "ffdhe2048", 00:23:15.991 "ffdhe3072", 00:23:15.991 "ffdhe4096", 00:23:15.991 "ffdhe6144", 00:23:15.991 "ffdhe8192" 00:23:15.991 ] 00:23:15.991 } 00:23:15.991 }, 00:23:15.991 { 00:23:15.991 "method": "bdev_nvme_attach_controller", 00:23:15.991 "params": { 00:23:15.992 "name": "TLSTEST", 00:23:15.992 "trtype": "TCP", 00:23:15.992 "adrfam": "IPv4", 00:23:15.992 "traddr": "10.0.0.2", 00:23:15.992 "trsvcid": "4420", 00:23:15.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.992 "prchk_reftag": false, 00:23:15.992 "prchk_guard": false, 00:23:15.992 "ctrlr_loss_timeout_sec": 0, 00:23:15.992 "reconnect_delay_sec": 0, 00:23:15.992 "fast_io_fail_timeout_sec": 0, 00:23:15.992 "psk": "/tmp/tmp.Ksjwacg6W6", 00:23:15.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.992 "hdgst": false, 00:23:15.992 "ddgst": false 00:23:15.992 } 00:23:15.992 }, 00:23:15.992 { 00:23:15.992 "method": "bdev_nvme_set_hotplug", 00:23:15.992 "params": { 00:23:15.992 "period_us": 100000, 00:23:15.992 "enable": false 00:23:15.992 } 00:23:15.992 }, 00:23:15.992 { 00:23:15.992 "method": "bdev_wait_for_examine" 00:23:15.992 } 
00:23:15.992 ] 00:23:15.992 }, 00:23:15.992 { 00:23:15.992 "subsystem": "nbd", 00:23:15.992 "config": [] 00:23:15.992 } 00:23:15.992 ] 00:23:15.992 }' 00:23:15.992 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.992 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.992 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.992 [2024-07-25 05:43:09.566967] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:15.992 [2024-07-25 05:43:09.567044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666033 ] 00:23:15.992 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.992 [2024-07-25 05:43:09.623140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.250 [2024-07-25 05:43:09.707494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.250 [2024-07-25 05:43:09.872533] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.250 [2024-07-25 05:43:09.872720] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:17.206 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.206 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:17.206 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.206 Running I/O for 10 seconds... 00:23:27.241 00:23:27.241 Latency(us) 00:23:27.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.241 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:27.241 Verification LBA range: start 0x0 length 0x2000 00:23:27.241 TLSTESTn1 : 10.06 1831.79 7.16 0.00 0.00 69703.70 6116.69 84662.80 00:23:27.241 =================================================================================================================== 00:23:27.241 Total : 1831.79 7.16 0.00 0.00 69703.70 6116.69 84662.80 00:23:27.241 0 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1666033 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1666033 ']' 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1666033 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1666033 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1666033' 00:23:27.241 killing process with pid 1666033 00:23:27.241 05:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1666033 00:23:27.241 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.241 00:23:27.241 Latency(us) 00:23:27.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.241 =================================================================================================================== 00:23:27.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.241 [2024-07-25 05:43:20.759676] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:27.241 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1666033 00:23:27.499 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1665878 00:23:27.499 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1665878 ']' 00:23:27.499 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1665878 00:23:27.499 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:27.499 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.499 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1665878 00:23:27.499 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:27.499 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:27.499 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1665878' 00:23:27.499 killing process with pid 1665878 00:23:27.499 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1665878 00:23:27.499 
[2024-07-25 05:43:21.007407] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:27.499 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1665878 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1667448 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1667448 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1667448 ']' 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.758 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.758 [2024-07-25 05:43:21.287440] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:27.758 [2024-07-25 05:43:21.287520] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.758 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.758 [2024-07-25 05:43:21.354169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.758 [2024-07-25 05:43:21.442314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.758 [2024-07-25 05:43:21.442374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.758 [2024-07-25 05:43:21.442403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.758 [2024-07-25 05:43:21.442415] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.758 [2024-07-25 05:43:21.442425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:27.758 [2024-07-25 05:43:21.442452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.016 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.016 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:28.016 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.016 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.016 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.016 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.016 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Ksjwacg6W6 00:23:28.016 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ksjwacg6W6 00:23:28.016 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:28.274 [2024-07-25 05:43:21.806859] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.274 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:28.532 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:28.790 [2024-07-25 05:43:22.320273] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.790 [2024-07-25 05:43:22.320550] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:28.790 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:29.048 malloc0 00:23:29.048 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:29.305 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ksjwacg6W6 00:23:29.563 [2024-07-25 05:43:23.150347] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:29.563 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1667637 00:23:29.563 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:29.563 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.563 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1667637 /var/tmp/bdevperf.sock 00:23:29.563 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1667637 ']' 00:23:29.563 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.563 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:29.563 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:29.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.563 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:29.563 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.563 [2024-07-25 05:43:23.212179] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:29.563 [2024-07-25 05:43:23.212261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667637 ] 00:23:29.563 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.822 [2024-07-25 05:43:23.272724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.822 [2024-07-25 05:43:23.358213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.822 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.822 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:29.822 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ksjwacg6W6 00:23:30.080 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:30.338 [2024-07-25 05:43:23.935919] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.338 nvme0n1 00:23:30.338 05:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:30.595 Running I/O for 1 seconds... 00:23:31.526 00:23:31.526 Latency(us) 00:23:31.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.526 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.526 Verification LBA range: start 0x0 length 0x2000 00:23:31.526 nvme0n1 : 1.03 2993.94 11.70 0.00 0.00 42275.14 6747.78 61749.48 00:23:31.526 =================================================================================================================== 00:23:31.526 Total : 2993.94 11.70 0.00 0.00 42275.14 6747.78 61749.48 00:23:31.526 0 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1667637 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1667637 ']' 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1667637 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1667637 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1667637' 00:23:31.526 killing process with pid 1667637 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 
1667637 00:23:31.526 Received shutdown signal, test time was about 1.000000 seconds 00:23:31.526 00:23:31.526 Latency(us) 00:23:31.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.526 =================================================================================================================== 00:23:31.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.526 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1667637 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1667448 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1667448 ']' 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1667448 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1667448 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1667448' 00:23:31.784 killing process with pid 1667448 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1667448 00:23:31.784 [2024-07-25 05:43:25.439357] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:31.784 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1667448 
00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1667922 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1667922 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1667922 ']' 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.042 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:32.042 [2024-07-25 05:43:25.717018] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:23:32.042 [2024-07-25 05:43:25.717122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.301 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.301 [2024-07-25 05:43:25.782825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.301 [2024-07-25 05:43:25.868429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.301 [2024-07-25 05:43:25.868492] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.301 [2024-07-25 05:43:25.868521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.301 [2024-07-25 05:43:25.868533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.301 [2024-07-25 05:43:25.868543] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:32.301 [2024-07-25 05:43:25.868579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.301 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.301 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:32.301 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.301 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.301 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.559 [2024-07-25 05:43:26.012378] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.559 malloc0 00:23:32.559 [2024-07-25 05:43:26.044746] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.559 [2024-07-25 05:43:26.052505] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1668057 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 1668057 /var/tmp/bdevperf.sock 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1668057 ']' 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.559 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.559 [2024-07-25 05:43:26.118274] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:23:32.559 [2024-07-25 05:43:26.118349] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668057 ] 00:23:32.559 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.559 [2024-07-25 05:43:26.179089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.817 [2024-07-25 05:43:26.270706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.817 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.817 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:32.817 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ksjwacg6W6 00:23:33.075 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:33.333 [2024-07-25 05:43:26.887315] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.333 nvme0n1 00:23:33.333 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:33.591 Running I/O for 1 seconds... 
00:23:34.523 00:23:34.523 Latency(us) 00:23:34.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.523 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:34.524 Verification LBA range: start 0x0 length 0x2000 00:23:34.524 nvme0n1 : 1.04 3074.89 12.01 0.00 0.00 40863.51 10971.21 74177.04 00:23:34.524 =================================================================================================================== 00:23:34.524 Total : 3074.89 12.01 0.00 0.00 40863.51 10971.21 74177.04 00:23:34.524 0 00:23:34.524 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:23:34.524 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.524 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.781 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.781 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:23:34.781 "subsystems": [ 00:23:34.781 { 00:23:34.781 "subsystem": "keyring", 00:23:34.781 "config": [ 00:23:34.781 { 00:23:34.781 "method": "keyring_file_add_key", 00:23:34.781 "params": { 00:23:34.781 "name": "key0", 00:23:34.781 "path": "/tmp/tmp.Ksjwacg6W6" 00:23:34.781 } 00:23:34.781 } 00:23:34.781 ] 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "subsystem": "iobuf", 00:23:34.781 "config": [ 00:23:34.781 { 00:23:34.781 "method": "iobuf_set_options", 00:23:34.781 "params": { 00:23:34.781 "small_pool_count": 8192, 00:23:34.781 "large_pool_count": 1024, 00:23:34.781 "small_bufsize": 8192, 00:23:34.781 "large_bufsize": 135168 00:23:34.781 } 00:23:34.781 } 00:23:34.781 ] 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "subsystem": "sock", 00:23:34.781 "config": [ 00:23:34.781 { 00:23:34.781 "method": "sock_set_default_impl", 00:23:34.781 "params": { 00:23:34.781 "impl_name": "posix" 00:23:34.781 } 
00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "sock_impl_set_options", 00:23:34.781 "params": { 00:23:34.781 "impl_name": "ssl", 00:23:34.781 "recv_buf_size": 4096, 00:23:34.781 "send_buf_size": 4096, 00:23:34.781 "enable_recv_pipe": true, 00:23:34.781 "enable_quickack": false, 00:23:34.781 "enable_placement_id": 0, 00:23:34.781 "enable_zerocopy_send_server": true, 00:23:34.781 "enable_zerocopy_send_client": false, 00:23:34.781 "zerocopy_threshold": 0, 00:23:34.781 "tls_version": 0, 00:23:34.781 "enable_ktls": false 00:23:34.781 } 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "sock_impl_set_options", 00:23:34.781 "params": { 00:23:34.781 "impl_name": "posix", 00:23:34.781 "recv_buf_size": 2097152, 00:23:34.781 "send_buf_size": 2097152, 00:23:34.781 "enable_recv_pipe": true, 00:23:34.781 "enable_quickack": false, 00:23:34.781 "enable_placement_id": 0, 00:23:34.781 "enable_zerocopy_send_server": true, 00:23:34.781 "enable_zerocopy_send_client": false, 00:23:34.781 "zerocopy_threshold": 0, 00:23:34.781 "tls_version": 0, 00:23:34.781 "enable_ktls": false 00:23:34.781 } 00:23:34.781 } 00:23:34.781 ] 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "subsystem": "vmd", 00:23:34.781 "config": [] 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "subsystem": "accel", 00:23:34.781 "config": [ 00:23:34.781 { 00:23:34.781 "method": "accel_set_options", 00:23:34.781 "params": { 00:23:34.781 "small_cache_size": 128, 00:23:34.781 "large_cache_size": 16, 00:23:34.781 "task_count": 2048, 00:23:34.781 "sequence_count": 2048, 00:23:34.781 "buf_count": 2048 00:23:34.781 } 00:23:34.781 } 00:23:34.781 ] 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "subsystem": "bdev", 00:23:34.781 "config": [ 00:23:34.781 { 00:23:34.781 "method": "bdev_set_options", 00:23:34.781 "params": { 00:23:34.781 "bdev_io_pool_size": 65535, 00:23:34.781 "bdev_io_cache_size": 256, 00:23:34.781 "bdev_auto_examine": true, 00:23:34.781 "iobuf_small_cache_size": 128, 00:23:34.781 "iobuf_large_cache_size": 16 
00:23:34.781 } 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "bdev_raid_set_options", 00:23:34.781 "params": { 00:23:34.781 "process_window_size_kb": 1024, 00:23:34.781 "process_max_bandwidth_mb_sec": 0 00:23:34.781 } 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "bdev_iscsi_set_options", 00:23:34.781 "params": { 00:23:34.781 "timeout_sec": 30 00:23:34.781 } 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "bdev_nvme_set_options", 00:23:34.781 "params": { 00:23:34.781 "action_on_timeout": "none", 00:23:34.781 "timeout_us": 0, 00:23:34.781 "timeout_admin_us": 0, 00:23:34.781 "keep_alive_timeout_ms": 10000, 00:23:34.781 "arbitration_burst": 0, 00:23:34.781 "low_priority_weight": 0, 00:23:34.781 "medium_priority_weight": 0, 00:23:34.781 "high_priority_weight": 0, 00:23:34.781 "nvme_adminq_poll_period_us": 10000, 00:23:34.781 "nvme_ioq_poll_period_us": 0, 00:23:34.781 "io_queue_requests": 0, 00:23:34.781 "delay_cmd_submit": true, 00:23:34.781 "transport_retry_count": 4, 00:23:34.781 "bdev_retry_count": 3, 00:23:34.781 "transport_ack_timeout": 0, 00:23:34.781 "ctrlr_loss_timeout_sec": 0, 00:23:34.781 "reconnect_delay_sec": 0, 00:23:34.781 "fast_io_fail_timeout_sec": 0, 00:23:34.781 "disable_auto_failback": false, 00:23:34.781 "generate_uuids": false, 00:23:34.781 "transport_tos": 0, 00:23:34.781 "nvme_error_stat": false, 00:23:34.781 "rdma_srq_size": 0, 00:23:34.781 "io_path_stat": false, 00:23:34.781 "allow_accel_sequence": false, 00:23:34.781 "rdma_max_cq_size": 0, 00:23:34.781 "rdma_cm_event_timeout_ms": 0, 00:23:34.781 "dhchap_digests": [ 00:23:34.781 "sha256", 00:23:34.781 "sha384", 00:23:34.781 "sha512" 00:23:34.781 ], 00:23:34.781 "dhchap_dhgroups": [ 00:23:34.781 "null", 00:23:34.781 "ffdhe2048", 00:23:34.781 "ffdhe3072", 00:23:34.781 "ffdhe4096", 00:23:34.781 "ffdhe6144", 00:23:34.781 "ffdhe8192" 00:23:34.781 ] 00:23:34.781 } 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "bdev_nvme_set_hotplug", 00:23:34.781 "params": { 00:23:34.781 
"period_us": 100000, 00:23:34.781 "enable": false 00:23:34.781 } 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "bdev_malloc_create", 00:23:34.781 "params": { 00:23:34.781 "name": "malloc0", 00:23:34.781 "num_blocks": 8192, 00:23:34.781 "block_size": 4096, 00:23:34.781 "physical_block_size": 4096, 00:23:34.781 "uuid": "0232b7b8-9f4e-4bff-b50c-b496a14dea33", 00:23:34.781 "optimal_io_boundary": 0, 00:23:34.781 "md_size": 0, 00:23:34.781 "dif_type": 0, 00:23:34.781 "dif_is_head_of_md": false, 00:23:34.781 "dif_pi_format": 0 00:23:34.781 } 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "bdev_wait_for_examine" 00:23:34.781 } 00:23:34.781 ] 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "subsystem": "nbd", 00:23:34.781 "config": [] 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "subsystem": "scheduler", 00:23:34.781 "config": [ 00:23:34.781 { 00:23:34.781 "method": "framework_set_scheduler", 00:23:34.781 "params": { 00:23:34.781 "name": "static" 00:23:34.781 } 00:23:34.781 } 00:23:34.781 ] 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "subsystem": "nvmf", 00:23:34.781 "config": [ 00:23:34.781 { 00:23:34.781 "method": "nvmf_set_config", 00:23:34.781 "params": { 00:23:34.781 "discovery_filter": "match_any", 00:23:34.781 "admin_cmd_passthru": { 00:23:34.781 "identify_ctrlr": false 00:23:34.781 } 00:23:34.781 } 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "nvmf_set_max_subsystems", 00:23:34.781 "params": { 00:23:34.781 "max_subsystems": 1024 00:23:34.781 } 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "nvmf_set_crdt", 00:23:34.781 "params": { 00:23:34.781 "crdt1": 0, 00:23:34.781 "crdt2": 0, 00:23:34.781 "crdt3": 0 00:23:34.781 } 00:23:34.781 }, 00:23:34.781 { 00:23:34.781 "method": "nvmf_create_transport", 00:23:34.781 "params": { 00:23:34.781 "trtype": "TCP", 00:23:34.781 "max_queue_depth": 128, 00:23:34.781 "max_io_qpairs_per_ctrlr": 127, 00:23:34.781 "in_capsule_data_size": 4096, 00:23:34.781 "max_io_size": 131072, 00:23:34.781 "io_unit_size": 
131072, 00:23:34.781 "max_aq_depth": 128, 00:23:34.781 "num_shared_buffers": 511, 00:23:34.781 "buf_cache_size": 4294967295, 00:23:34.781 "dif_insert_or_strip": false, 00:23:34.781 "zcopy": false, 00:23:34.781 "c2h_success": false, 00:23:34.781 "sock_priority": 0, 00:23:34.781 "abort_timeout_sec": 1, 00:23:34.781 "ack_timeout": 0, 00:23:34.781 "data_wr_pool_size": 0 00:23:34.781 } 00:23:34.781 }, 00:23:34.782 { 00:23:34.782 "method": "nvmf_create_subsystem", 00:23:34.782 "params": { 00:23:34.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.782 "allow_any_host": false, 00:23:34.782 "serial_number": "00000000000000000000", 00:23:34.782 "model_number": "SPDK bdev Controller", 00:23:34.782 "max_namespaces": 32, 00:23:34.782 "min_cntlid": 1, 00:23:34.782 "max_cntlid": 65519, 00:23:34.782 "ana_reporting": false 00:23:34.782 } 00:23:34.782 }, 00:23:34.782 { 00:23:34.782 "method": "nvmf_subsystem_add_host", 00:23:34.782 "params": { 00:23:34.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.782 "host": "nqn.2016-06.io.spdk:host1", 00:23:34.782 "psk": "key0" 00:23:34.782 } 00:23:34.782 }, 00:23:34.782 { 00:23:34.782 "method": "nvmf_subsystem_add_ns", 00:23:34.782 "params": { 00:23:34.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.782 "namespace": { 00:23:34.782 "nsid": 1, 00:23:34.782 "bdev_name": "malloc0", 00:23:34.782 "nguid": "0232B7B89F4E4BFFB50CB496A14DEA33", 00:23:34.782 "uuid": "0232b7b8-9f4e-4bff-b50c-b496a14dea33", 00:23:34.782 "no_auto_visible": false 00:23:34.782 } 00:23:34.782 } 00:23:34.782 }, 00:23:34.782 { 00:23:34.782 "method": "nvmf_subsystem_add_listener", 00:23:34.782 "params": { 00:23:34.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.782 "listen_address": { 00:23:34.782 "trtype": "TCP", 00:23:34.782 "adrfam": "IPv4", 00:23:34.782 "traddr": "10.0.0.2", 00:23:34.782 "trsvcid": "4420" 00:23:34.782 }, 00:23:34.782 "secure_channel": false, 00:23:34.782 "sock_impl": "ssl" 00:23:34.782 } 00:23:34.782 } 00:23:34.782 ] 00:23:34.782 } 00:23:34.782 ] 
00:23:34.782 }' 00:23:34.782 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:35.040 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:23:35.040 "subsystems": [ 00:23:35.040 { 00:23:35.040 "subsystem": "keyring", 00:23:35.040 "config": [ 00:23:35.040 { 00:23:35.040 "method": "keyring_file_add_key", 00:23:35.040 "params": { 00:23:35.040 "name": "key0", 00:23:35.040 "path": "/tmp/tmp.Ksjwacg6W6" 00:23:35.040 } 00:23:35.040 } 00:23:35.040 ] 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "subsystem": "iobuf", 00:23:35.040 "config": [ 00:23:35.040 { 00:23:35.040 "method": "iobuf_set_options", 00:23:35.040 "params": { 00:23:35.040 "small_pool_count": 8192, 00:23:35.040 "large_pool_count": 1024, 00:23:35.040 "small_bufsize": 8192, 00:23:35.040 "large_bufsize": 135168 00:23:35.040 } 00:23:35.040 } 00:23:35.040 ] 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "subsystem": "sock", 00:23:35.040 "config": [ 00:23:35.040 { 00:23:35.040 "method": "sock_set_default_impl", 00:23:35.040 "params": { 00:23:35.040 "impl_name": "posix" 00:23:35.040 } 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "method": "sock_impl_set_options", 00:23:35.040 "params": { 00:23:35.040 "impl_name": "ssl", 00:23:35.040 "recv_buf_size": 4096, 00:23:35.040 "send_buf_size": 4096, 00:23:35.040 "enable_recv_pipe": true, 00:23:35.040 "enable_quickack": false, 00:23:35.040 "enable_placement_id": 0, 00:23:35.040 "enable_zerocopy_send_server": true, 00:23:35.040 "enable_zerocopy_send_client": false, 00:23:35.040 "zerocopy_threshold": 0, 00:23:35.040 "tls_version": 0, 00:23:35.040 "enable_ktls": false 00:23:35.040 } 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "method": "sock_impl_set_options", 00:23:35.040 "params": { 00:23:35.040 "impl_name": "posix", 00:23:35.040 "recv_buf_size": 2097152, 00:23:35.040 "send_buf_size": 2097152, 00:23:35.040 
"enable_recv_pipe": true, 00:23:35.040 "enable_quickack": false, 00:23:35.040 "enable_placement_id": 0, 00:23:35.040 "enable_zerocopy_send_server": true, 00:23:35.040 "enable_zerocopy_send_client": false, 00:23:35.040 "zerocopy_threshold": 0, 00:23:35.040 "tls_version": 0, 00:23:35.040 "enable_ktls": false 00:23:35.040 } 00:23:35.040 } 00:23:35.040 ] 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "subsystem": "vmd", 00:23:35.040 "config": [] 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "subsystem": "accel", 00:23:35.040 "config": [ 00:23:35.040 { 00:23:35.040 "method": "accel_set_options", 00:23:35.040 "params": { 00:23:35.040 "small_cache_size": 128, 00:23:35.040 "large_cache_size": 16, 00:23:35.040 "task_count": 2048, 00:23:35.040 "sequence_count": 2048, 00:23:35.040 "buf_count": 2048 00:23:35.040 } 00:23:35.040 } 00:23:35.040 ] 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "subsystem": "bdev", 00:23:35.040 "config": [ 00:23:35.040 { 00:23:35.040 "method": "bdev_set_options", 00:23:35.040 "params": { 00:23:35.040 "bdev_io_pool_size": 65535, 00:23:35.040 "bdev_io_cache_size": 256, 00:23:35.040 "bdev_auto_examine": true, 00:23:35.040 "iobuf_small_cache_size": 128, 00:23:35.040 "iobuf_large_cache_size": 16 00:23:35.040 } 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "method": "bdev_raid_set_options", 00:23:35.040 "params": { 00:23:35.040 "process_window_size_kb": 1024, 00:23:35.040 "process_max_bandwidth_mb_sec": 0 00:23:35.040 } 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "method": "bdev_iscsi_set_options", 00:23:35.040 "params": { 00:23:35.040 "timeout_sec": 30 00:23:35.040 } 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "method": "bdev_nvme_set_options", 00:23:35.040 "params": { 00:23:35.040 "action_on_timeout": "none", 00:23:35.040 "timeout_us": 0, 00:23:35.040 "timeout_admin_us": 0, 00:23:35.040 "keep_alive_timeout_ms": 10000, 00:23:35.040 "arbitration_burst": 0, 00:23:35.040 "low_priority_weight": 0, 00:23:35.040 "medium_priority_weight": 0, 00:23:35.040 
"high_priority_weight": 0, 00:23:35.040 "nvme_adminq_poll_period_us": 10000, 00:23:35.040 "nvme_ioq_poll_period_us": 0, 00:23:35.040 "io_queue_requests": 512, 00:23:35.040 "delay_cmd_submit": true, 00:23:35.040 "transport_retry_count": 4, 00:23:35.040 "bdev_retry_count": 3, 00:23:35.040 "transport_ack_timeout": 0, 00:23:35.040 "ctrlr_loss_timeout_sec": 0, 00:23:35.040 "reconnect_delay_sec": 0, 00:23:35.040 "fast_io_fail_timeout_sec": 0, 00:23:35.040 "disable_auto_failback": false, 00:23:35.040 "generate_uuids": false, 00:23:35.040 "transport_tos": 0, 00:23:35.040 "nvme_error_stat": false, 00:23:35.040 "rdma_srq_size": 0, 00:23:35.040 "io_path_stat": false, 00:23:35.040 "allow_accel_sequence": false, 00:23:35.040 "rdma_max_cq_size": 0, 00:23:35.040 "rdma_cm_event_timeout_ms": 0, 00:23:35.040 "dhchap_digests": [ 00:23:35.040 "sha256", 00:23:35.040 "sha384", 00:23:35.040 "sha512" 00:23:35.040 ], 00:23:35.040 "dhchap_dhgroups": [ 00:23:35.040 "null", 00:23:35.040 "ffdhe2048", 00:23:35.040 "ffdhe3072", 00:23:35.040 "ffdhe4096", 00:23:35.040 "ffdhe6144", 00:23:35.040 "ffdhe8192" 00:23:35.040 ] 00:23:35.040 } 00:23:35.040 }, 00:23:35.040 { 00:23:35.040 "method": "bdev_nvme_attach_controller", 00:23:35.040 "params": { 00:23:35.040 "name": "nvme0", 00:23:35.040 "trtype": "TCP", 00:23:35.040 "adrfam": "IPv4", 00:23:35.040 "traddr": "10.0.0.2", 00:23:35.040 "trsvcid": "4420", 00:23:35.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.041 "prchk_reftag": false, 00:23:35.041 "prchk_guard": false, 00:23:35.041 "ctrlr_loss_timeout_sec": 0, 00:23:35.041 "reconnect_delay_sec": 0, 00:23:35.041 "fast_io_fail_timeout_sec": 0, 00:23:35.041 "psk": "key0", 00:23:35.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.041 "hdgst": false, 00:23:35.041 "ddgst": false 00:23:35.041 } 00:23:35.041 }, 00:23:35.041 { 00:23:35.041 "method": "bdev_nvme_set_hotplug", 00:23:35.041 "params": { 00:23:35.041 "period_us": 100000, 00:23:35.041 "enable": false 00:23:35.041 } 00:23:35.041 }, 
00:23:35.041 { 00:23:35.041 "method": "bdev_enable_histogram", 00:23:35.041 "params": { 00:23:35.041 "name": "nvme0n1", 00:23:35.041 "enable": true 00:23:35.041 } 00:23:35.041 }, 00:23:35.041 { 00:23:35.041 "method": "bdev_wait_for_examine" 00:23:35.041 } 00:23:35.041 ] 00:23:35.041 }, 00:23:35.041 { 00:23:35.041 "subsystem": "nbd", 00:23:35.041 "config": [] 00:23:35.041 } 00:23:35.041 ] 00:23:35.041 }' 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1668057 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1668057 ']' 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1668057 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1668057 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1668057' 00:23:35.041 killing process with pid 1668057 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1668057 00:23:35.041 Received shutdown signal, test time was about 1.000000 seconds 00:23:35.041 00:23:35.041 Latency(us) 00:23:35.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.041 =================================================================================================================== 00:23:35.041 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:23:35.041 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1668057 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1667922 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1667922 ']' 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1667922 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1667922 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1667922' 00:23:35.298 killing process with pid 1667922 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1667922 00:23:35.298 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1667922 00:23:35.589 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:23:35.589 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.589 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:23:35.589 "subsystems": [ 00:23:35.589 { 00:23:35.589 "subsystem": "keyring", 00:23:35.589 "config": [ 00:23:35.589 { 00:23:35.589 "method": "keyring_file_add_key", 00:23:35.589 "params": { 00:23:35.589 "name": "key0", 00:23:35.589 "path": 
"/tmp/tmp.Ksjwacg6W6" 00:23:35.589 } 00:23:35.589 } 00:23:35.589 ] 00:23:35.589 }, 00:23:35.589 { 00:23:35.589 "subsystem": "iobuf", 00:23:35.589 "config": [ 00:23:35.589 { 00:23:35.589 "method": "iobuf_set_options", 00:23:35.589 "params": { 00:23:35.589 "small_pool_count": 8192, 00:23:35.589 "large_pool_count": 1024, 00:23:35.589 "small_bufsize": 8192, 00:23:35.589 "large_bufsize": 135168 00:23:35.589 } 00:23:35.589 } 00:23:35.589 ] 00:23:35.589 }, 00:23:35.589 { 00:23:35.589 "subsystem": "sock", 00:23:35.589 "config": [ 00:23:35.589 { 00:23:35.589 "method": "sock_set_default_impl", 00:23:35.589 "params": { 00:23:35.589 "impl_name": "posix" 00:23:35.589 } 00:23:35.589 }, 00:23:35.589 { 00:23:35.589 "method": "sock_impl_set_options", 00:23:35.589 "params": { 00:23:35.589 "impl_name": "ssl", 00:23:35.589 "recv_buf_size": 4096, 00:23:35.589 "send_buf_size": 4096, 00:23:35.589 "enable_recv_pipe": true, 00:23:35.589 "enable_quickack": false, 00:23:35.589 "enable_placement_id": 0, 00:23:35.589 "enable_zerocopy_send_server": true, 00:23:35.589 "enable_zerocopy_send_client": false, 00:23:35.589 "zerocopy_threshold": 0, 00:23:35.589 "tls_version": 0, 00:23:35.589 "enable_ktls": false 00:23:35.589 } 00:23:35.589 }, 00:23:35.589 { 00:23:35.589 "method": "sock_impl_set_options", 00:23:35.589 "params": { 00:23:35.589 "impl_name": "posix", 00:23:35.589 "recv_buf_size": 2097152, 00:23:35.589 "send_buf_size": 2097152, 00:23:35.589 "enable_recv_pipe": true, 00:23:35.589 "enable_quickack": false, 00:23:35.589 "enable_placement_id": 0, 00:23:35.589 "enable_zerocopy_send_server": true, 00:23:35.589 "enable_zerocopy_send_client": false, 00:23:35.589 "zerocopy_threshold": 0, 00:23:35.589 "tls_version": 0, 00:23:35.589 "enable_ktls": false 00:23:35.589 } 00:23:35.589 } 00:23:35.589 ] 00:23:35.589 }, 00:23:35.589 { 00:23:35.589 "subsystem": "vmd", 00:23:35.589 "config": [] 00:23:35.589 }, 00:23:35.589 { 00:23:35.589 "subsystem": "accel", 00:23:35.589 "config": [ 00:23:35.589 { 
00:23:35.589 "method": "accel_set_options", 00:23:35.589 "params": { 00:23:35.589 "small_cache_size": 128, 00:23:35.589 "large_cache_size": 16, 00:23:35.589 "task_count": 2048, 00:23:35.589 "sequence_count": 2048, 00:23:35.589 "buf_count": 2048 00:23:35.589 } 00:23:35.589 } 00:23:35.589 ] 00:23:35.589 }, 00:23:35.590 { 00:23:35.590 "subsystem": "bdev", 00:23:35.590 "config": [ 00:23:35.590 { 00:23:35.590 "method": "bdev_set_options", 00:23:35.590 "params": { 00:23:35.590 "bdev_io_pool_size": 65535, 00:23:35.590 "bdev_io_cache_size": 256, 00:23:35.590 "bdev_auto_examine": true, 00:23:35.590 "iobuf_small_cache_size": 128, 00:23:35.590 "iobuf_large_cache_size": 16 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "bdev_raid_set_options", 00:23:35.590 "params": { 00:23:35.590 "process_window_size_kb": 1024, 00:23:35.590 "process_max_bandwidth_mb_sec": 0 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "bdev_iscsi_set_options", 00:23:35.590 "params": { 00:23:35.590 "timeout_sec": 30 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "bdev_nvme_set_options", 00:23:35.590 "params": { 00:23:35.590 "action_on_timeout": "none", 00:23:35.590 "timeout_us": 0, 00:23:35.590 "timeout_admin_us": 0, 00:23:35.590 "keep_alive_timeout_ms": 10000, 00:23:35.590 "arbitration_burst": 0, 00:23:35.590 "low_priority_weight": 0, 00:23:35.590 "medium_priority_weight": 0, 00:23:35.590 "high_priority_weight": 0, 00:23:35.590 "nvme_adminq_poll_period_us": 10000, 00:23:35.590 "nvme_ioq_poll_period_us": 0, 00:23:35.590 "io_queue_requests": 0, 00:23:35.590 "delay_cmd_submit": true, 00:23:35.590 "transport_retry_count": 4, 00:23:35.590 "bdev_retry_count": 3, 00:23:35.590 "transport_ack_timeout": 0, 00:23:35.590 "ctrlr_loss_timeout_sec": 0, 00:23:35.590 "reconnect_delay_sec": 0, 00:23:35.590 "fast_io_fail_timeout_sec": 0, 00:23:35.590 "disable_auto_failback": false, 00:23:35.590 "generate_uuids": false, 00:23:35.590 "transport_tos": 0, 
00:23:35.590 "nvme_error_stat": false, 00:23:35.590 "rdma_srq_size": 0, 00:23:35.590 "io_path_stat": false, 00:23:35.590 "allow_accel_sequence": false, 00:23:35.590 "rdma_max_cq_size": 0, 00:23:35.590 "rdma_cm_event_timeout_ms": 0, 00:23:35.590 "dhchap_digests": [ 00:23:35.590 "sha256", 00:23:35.590 "sha384", 00:23:35.590 "sha512" 00:23:35.590 ], 00:23:35.590 "dhchap_dhgroups": [ 00:23:35.590 "null", 00:23:35.590 "ffdhe2048", 00:23:35.590 "ffdhe3072", 00:23:35.590 "ffdhe4096", 00:23:35.590 "ffdhe6144", 00:23:35.590 "ffdhe8192" 00:23:35.590 ] 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "bdev_nvme_set_hotplug", 00:23:35.590 "params": { 00:23:35.590 "period_us": 100000, 00:23:35.590 "enable": false 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "bdev_malloc_create", 00:23:35.590 "params": { 00:23:35.590 "name": "malloc0", 00:23:35.590 "num_blocks": 8192, 00:23:35.590 "block_size": 4096, 00:23:35.590 "physical_block_size": 4096, 00:23:35.590 "uuid": "0232b7b8-9f4e-4bff-b50c-b496a14dea33", 00:23:35.590 "optimal_io_boundary": 0, 00:23:35.590 "md_size": 0, 00:23:35.590 "dif_type": 0, 00:23:35.590 "dif_is_head_of_md": false, 00:23:35.590 "dif_pi_format": 0 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "bdev_wait_for_examine" 00:23:35.590 } 00:23:35.590 ] 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "subsystem": "nbd", 00:23:35.590 "config": [] 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "subsystem": "scheduler", 00:23:35.590 "config": [ 00:23:35.590 { 00:23:35.590 "method": "framework_set_scheduler", 00:23:35.590 "params": { 00:23:35.590 "name": "static" 00:23:35.590 } 00:23:35.590 } 00:23:35.590 ] 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "subsystem": "nvmf", 00:23:35.590 "config": [ 00:23:35.590 { 00:23:35.590 "method": "nvmf_set_config", 00:23:35.590 "params": { 00:23:35.590 "discovery_filter": "match_any", 00:23:35.590 "admin_cmd_passthru": { 00:23:35.590 "identify_ctrlr": false 00:23:35.590 } 
00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "nvmf_set_max_subsystems", 00:23:35.590 "params": { 00:23:35.590 "max_subsystems": 1024 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "nvmf_set_crdt", 00:23:35.590 "params": { 00:23:35.590 "crdt1": 0, 00:23:35.590 "crdt2": 0, 00:23:35.590 "crdt3": 0 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "nvmf_create_transport", 00:23:35.590 "params": { 00:23:35.590 "trtype": "TCP", 00:23:35.590 "max_queue_depth": 128, 00:23:35.590 "max_io_qpairs_per_ctrlr": 127, 00:23:35.590 "in_capsule_data_size": 4096, 00:23:35.590 "max_io_size": 131072, 00:23:35.590 "io_unit_size": 131072, 00:23:35.590 "max_aq_depth": 128, 00:23:35.590 "num_shared_buffers": 511, 00:23:35.590 "buf_cache_size": 4294967295, 00:23:35.590 "dif_insert_or_strip": false, 00:23:35.590 "zcopy": false, 00:23:35.590 "c2h_success": false, 00:23:35.590 "sock_priority": 0, 00:23:35.590 "abort_timeout_sec": 1, 00:23:35.590 "ack_timeout": 0, 00:23:35.590 "data_wr_pool_size": 0 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "nvmf_create_subsystem", 00:23:35.590 "params": { 00:23:35.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.590 "allow_any_host": false, 00:23:35.590 "serial_number": "00000000000000000000", 00:23:35.590 "model_number": "SPDK bdev Controller", 00:23:35.590 "max_namespaces": 32, 00:23:35.590 "min_cntlid": 1, 00:23:35.590 "max_cntlid": 65519, 00:23:35.590 "ana_reporting": false 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "nvmf_subsystem_add_host", 00:23:35.590 "params": { 00:23:35.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.590 "host": "nqn.2016-06.io.spdk:host1", 00:23:35.590 "psk": "key0" 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "nvmf_subsystem_add_ns", 00:23:35.590 "params": { 00:23:35.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.590 "namespace": { 00:23:35.590 "nsid": 1, 00:23:35.590 "bdev_name": 
"malloc0", 00:23:35.590 "nguid": "0232B7B89F4E4BFFB50CB496A14DEA33", 00:23:35.590 "uuid": "0232b7b8-9f4e-4bff-b50c-b496a14dea33", 00:23:35.590 "no_auto_visible": false 00:23:35.590 } 00:23:35.590 } 00:23:35.590 }, 00:23:35.590 { 00:23:35.590 "method": "nvmf_subsystem_add_listener", 00:23:35.590 "params": { 00:23:35.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.590 "listen_address": { 00:23:35.590 "trtype": "TCP", 00:23:35.590 "adrfam": "IPv4", 00:23:35.590 "traddr": "10.0.0.2", 00:23:35.590 "trsvcid": "4420" 00:23:35.590 }, 00:23:35.590 "secure_channel": false, 00:23:35.590 "sock_impl": "ssl" 00:23:35.590 } 00:23:35.590 } 00:23:35.590 ] 00:23:35.590 } 00:23:35.590 ] 00:23:35.590 }' 00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1668365 00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1668365 00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1668365 ']' 00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.590 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.590 [2024-07-25 05:43:29.168090] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:35.590 [2024-07-25 05:43:29.168182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.590 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.590 [2024-07-25 05:43:29.233863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.849 [2024-07-25 05:43:29.320770] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.849 [2024-07-25 05:43:29.320821] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.849 [2024-07-25 05:43:29.320849] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.849 [2024-07-25 05:43:29.320868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.849 [2024-07-25 05:43:29.320878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:35.849 [2024-07-25 05:43:29.320946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.107 [2024-07-25 05:43:29.565200] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.107 [2024-07-25 05:43:29.611046] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.107 [2024-07-25 05:43:29.611333] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1668503 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1668503 /var/tmp/bdevperf.sock 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1668503 ']' 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.676 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:23:36.676 "subsystems": [ 00:23:36.676 { 00:23:36.676 "subsystem": "keyring", 00:23:36.676 "config": [ 00:23:36.676 { 00:23:36.676 "method": "keyring_file_add_key", 00:23:36.676 "params": { 00:23:36.676 "name": "key0", 00:23:36.676 "path": "/tmp/tmp.Ksjwacg6W6" 00:23:36.676 } 00:23:36.676 } 00:23:36.676 ] 00:23:36.676 }, 00:23:36.676 { 00:23:36.676 "subsystem": "iobuf", 00:23:36.676 "config": [ 00:23:36.676 { 00:23:36.676 "method": "iobuf_set_options", 00:23:36.676 "params": { 00:23:36.676 "small_pool_count": 8192, 00:23:36.676 "large_pool_count": 1024, 00:23:36.676 "small_bufsize": 8192, 00:23:36.676 "large_bufsize": 135168 00:23:36.676 } 00:23:36.676 } 00:23:36.676 ] 00:23:36.676 }, 00:23:36.676 { 00:23:36.676 "subsystem": "sock", 00:23:36.676 "config": [ 00:23:36.676 { 00:23:36.676 "method": "sock_set_default_impl", 00:23:36.676 "params": { 00:23:36.676 "impl_name": "posix" 00:23:36.676 } 00:23:36.676 }, 00:23:36.676 { 00:23:36.676 "method": "sock_impl_set_options", 00:23:36.676 "params": { 00:23:36.676 "impl_name": "ssl", 00:23:36.676 "recv_buf_size": 4096, 00:23:36.676 "send_buf_size": 4096, 00:23:36.676 "enable_recv_pipe": true, 00:23:36.676 "enable_quickack": false, 00:23:36.676 "enable_placement_id": 0, 00:23:36.676 "enable_zerocopy_send_server": true, 00:23:36.676 "enable_zerocopy_send_client": false, 00:23:36.676 "zerocopy_threshold": 0, 00:23:36.676 "tls_version": 0, 00:23:36.676 "enable_ktls": false 00:23:36.676 } 00:23:36.676 }, 00:23:36.676 { 00:23:36.676 "method": "sock_impl_set_options", 00:23:36.676 "params": { 00:23:36.676 "impl_name": "posix", 
00:23:36.676 "recv_buf_size": 2097152, 00:23:36.676 "send_buf_size": 2097152, 00:23:36.676 "enable_recv_pipe": true, 00:23:36.676 "enable_quickack": false, 00:23:36.676 "enable_placement_id": 0, 00:23:36.676 "enable_zerocopy_send_server": true, 00:23:36.676 "enable_zerocopy_send_client": false, 00:23:36.676 "zerocopy_threshold": 0, 00:23:36.676 "tls_version": 0, 00:23:36.676 "enable_ktls": false 00:23:36.676 } 00:23:36.676 } 00:23:36.676 ] 00:23:36.676 }, 00:23:36.676 { 00:23:36.676 "subsystem": "vmd", 00:23:36.676 "config": [] 00:23:36.676 }, 00:23:36.676 { 00:23:36.676 "subsystem": "accel", 00:23:36.676 "config": [ 00:23:36.676 { 00:23:36.676 "method": "accel_set_options", 00:23:36.676 "params": { 00:23:36.676 "small_cache_size": 128, 00:23:36.676 "large_cache_size": 16, 00:23:36.676 "task_count": 2048, 00:23:36.676 "sequence_count": 2048, 00:23:36.676 "buf_count": 2048 00:23:36.676 } 00:23:36.676 } 00:23:36.676 ] 00:23:36.676 }, 00:23:36.676 { 00:23:36.676 "subsystem": "bdev", 00:23:36.676 "config": [ 00:23:36.676 { 00:23:36.676 "method": "bdev_set_options", 00:23:36.676 "params": { 00:23:36.676 "bdev_io_pool_size": 65535, 00:23:36.676 "bdev_io_cache_size": 256, 00:23:36.676 "bdev_auto_examine": true, 00:23:36.676 "iobuf_small_cache_size": 128, 00:23:36.696 "iobuf_large_cache_size": 16 00:23:36.696 } 00:23:36.696 }, 00:23:36.696 { 00:23:36.696 "method": "bdev_raid_set_options", 00:23:36.696 "params": { 00:23:36.696 "process_window_size_kb": 1024, 00:23:36.696 "process_max_bandwidth_mb_sec": 0 00:23:36.696 } 00:23:36.696 }, 00:23:36.696 { 00:23:36.696 "method": "bdev_iscsi_set_options", 00:23:36.696 "params": { 00:23:36.696 "timeout_sec": 30 00:23:36.696 } 00:23:36.696 }, 00:23:36.696 { 00:23:36.696 "method": "bdev_nvme_set_options", 00:23:36.696 "params": { 00:23:36.696 "action_on_timeout": "none", 00:23:36.696 "timeout_us": 0, 00:23:36.696 "timeout_admin_us": 0, 00:23:36.696 "keep_alive_timeout_ms": 10000, 00:23:36.696 "arbitration_burst": 0, 00:23:36.697 
"low_priority_weight": 0, 00:23:36.697 "medium_priority_weight": 0, 00:23:36.697 "high_priority_weight": 0, 00:23:36.697 "nvme_adminq_poll_period_us": 10000, 00:23:36.697 "nvme_ioq_poll_period_us": 0, 00:23:36.697 "io_queue_requests": 512, 00:23:36.697 "delay_cmd_submit": true, 00:23:36.697 "transport_retry_count": 4, 00:23:36.697 "bdev_retry_count": 3, 00:23:36.697 "transport_ack_timeout": 0, 00:23:36.697 "ctrlr_loss_timeout_sec": 0, 00:23:36.697 "reconnect_delay_sec": 0, 00:23:36.697 "fast_io_fail_timeout_sec": 0, 00:23:36.697 "disable_auto_failback": false, 00:23:36.697 "generate_uuids": false, 00:23:36.697 "transport_tos": 0, 00:23:36.697 "nvme_error_stat": false, 00:23:36.697 "rdma_srq_size": 0, 00:23:36.697 "io_path_stat": false, 00:23:36.697 "allow_accel_sequence": false, 00:23:36.697 "rdma_max_cq_size": 0, 00:23:36.697 "rdma_cm_event_timeout_ms": 0, 00:23:36.697 "dhchap_digests": [ 00:23:36.697 "sha256", 00:23:36.697 "sha384", 00:23:36.697 "sha512" 00:23:36.697 ], 00:23:36.697 "dhchap_dhgroups": [ 00:23:36.697 "null", 00:23:36.697 "ffdhe2048", 00:23:36.697 "ffdhe3072", 00:23:36.697 "ffdhe4096", 00:23:36.697 "ffdhe6144", 00:23:36.697 "ffdhe8192" 00:23:36.697 ] 00:23:36.697 } 00:23:36.697 }, 00:23:36.697 { 00:23:36.697 "method": "bdev_nvme_attach_controller", 00:23:36.697 "params": { 00:23:36.697 "name": "nvme0", 00:23:36.697 "trtype": "TCP", 00:23:36.697 "adrfam": "IPv4", 00:23:36.697 "traddr": "10.0.0.2", 00:23:36.697 "trsvcid": "4420", 00:23:36.697 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.697 "prchk_reftag": false, 00:23:36.697 "prchk_guard": false, 00:23:36.697 "ctrlr_loss_timeout_sec": 0, 00:23:36.697 "reconnect_delay_sec": 0, 00:23:36.697 "fast_io_fail_timeout_sec": 0, 00:23:36.697 "psk": "key0", 00:23:36.697 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.697 "hdgst": false, 00:23:36.697 "ddgst": false 00:23:36.697 } 00:23:36.697 }, 00:23:36.697 { 00:23:36.697 "method": "bdev_nvme_set_hotplug", 00:23:36.697 "params": { 00:23:36.697 
"period_us": 100000, 00:23:36.697 "enable": false 00:23:36.697 } 00:23:36.697 }, 00:23:36.697 { 00:23:36.697 "method": "bdev_enable_histogram", 00:23:36.697 "params": { 00:23:36.697 "name": "nvme0n1", 00:23:36.697 "enable": true 00:23:36.697 } 00:23:36.697 }, 00:23:36.697 { 00:23:36.697 "method": "bdev_wait_for_examine" 00:23:36.697 } 00:23:36.697 ] 00:23:36.697 }, 00:23:36.697 { 00:23:36.697 "subsystem": "nbd", 00:23:36.697 "config": [] 00:23:36.697 } 00:23:36.697 ] 00:23:36.697 }' 00:23:36.697 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.697 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.697 [2024-07-25 05:43:30.203023] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:36.697 [2024-07-25 05:43:30.203104] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668503 ] 00:23:36.697 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.697 [2024-07-25 05:43:30.266898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.697 [2024-07-25 05:43:30.359285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.956 [2024-07-25 05:43:30.532061] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.520 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.520 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:37.520 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:37.520 05:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:37.778 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.778 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.036 Running I/O for 1 seconds... 00:23:38.970 00:23:38.970 Latency(us) 00:23:38.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.970 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:38.970 Verification LBA range: start 0x0 length 0x2000 00:23:38.970 nvme0n1 : 1.05 2568.31 10.03 0.00 0.00 48752.90 6505.05 80390.83 00:23:38.970 =================================================================================================================== 00:23:38.970 Total : 2568.31 10.03 0.00 0.00 48752.90 6505.05 80390.83 00:23:38.970 0 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:38.970 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:38.970 nvmf_trace.0 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1668503 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1668503 ']' 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1668503 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1668503 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1668503' 00:23:39.228 killing process with pid 1668503 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1668503 00:23:39.228 Received shutdown signal, test time was about 1.000000 seconds 00:23:39.228 00:23:39.228 Latency(us) 00:23:39.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.228 
=================================================================================================================== 00:23:39.228 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1668503 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:39.228 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:39.486 rmmod nvme_tcp 00:23:39.486 rmmod nvme_fabrics 00:23:39.486 rmmod nvme_keyring 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1668365 ']' 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1668365 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1668365 ']' 00:23:39.486 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1668365 00:23:39.487 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:39.487 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:23:39.487 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1668365 00:23:39.487 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:39.487 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:39.487 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1668365' 00:23:39.487 killing process with pid 1668365 00:23:39.487 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1668365 00:23:39.487 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1668365 00:23:39.745 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:39.745 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.745 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.745 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.745 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.745 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.745 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.745 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.641 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:41.641 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.IAjn8Mibgg /tmp/tmp.f3ZpYZR2Ni /tmp/tmp.Ksjwacg6W6 00:23:41.641 00:23:41.641 real 1m18.629s 
00:23:41.641 user 2m6.216s 00:23:41.641 sys 0m27.019s 00:23:41.641 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:41.641 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.641 ************************************ 00:23:41.641 END TEST nvmf_tls 00:23:41.641 ************************************ 00:23:41.641 05:43:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:41.641 05:43:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:41.641 05:43:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:41.641 05:43:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:41.900 ************************************ 00:23:41.900 START TEST nvmf_fips 00:23:41.900 ************************************ 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:41.900 * Looking for test storage... 
00:23:41.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.900 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:41.901 Error setting digest 00:23:41.901 00F2B2655D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:41.901 00F2B2655D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.901 05:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:41.901 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:43.815 05:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:43.815 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:43.815 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.815 05:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:43.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.815 
05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:43.815 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.815 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:44.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:44.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:23:44.073 00:23:44.073 --- 10.0.0.2 ping statistics --- 00:23:44.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.073 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:23:44.073 00:23:44.073 --- 10.0.0.1 ping statistics --- 00:23:44.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.073 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1670852 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1670852 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1670852 ']' 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:44.073 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:44.073 [2024-07-25 05:43:37.674394] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:23:44.073 [2024-07-25 05:43:37.674490] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.073 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.073 [2024-07-25 05:43:37.742912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.330 [2024-07-25 05:43:37.833375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.330 [2024-07-25 05:43:37.833440] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.330 [2024-07-25 05:43:37.833456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.330 [2024-07-25 05:43:37.833470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.330 [2024-07-25 05:43:37.833482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.330 [2024-07-25 05:43:37.833514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.894 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:44.894 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:44.894 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.894 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:44.894 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.151 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.151 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:45.151 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:45.151 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.151 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:45.151 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.152 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.152 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.152 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:45.152 [2024-07-25 05:43:38.842735] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.409 [2024-07-25 05:43:38.858733] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:45.409 [2024-07-25 05:43:38.858945] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.409 [2024-07-25 05:43:38.891297] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:45.409 malloc0 00:23:45.409 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.409 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1671016 00:23:45.409 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.409 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1671016 /var/tmp/bdevperf.sock 00:23:45.410 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1671016 ']' 00:23:45.410 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.410 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.410 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:45.410 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.410 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.410 [2024-07-25 05:43:38.981348] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:23:45.410 [2024-07-25 05:43:38.981430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1671016 ] 00:23:45.410 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.410 [2024-07-25 05:43:39.038158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.668 [2024-07-25 05:43:39.122118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.668 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:45.668 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:45.668 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:45.925 [2024-07-25 05:43:39.470084] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.925 [2024-07-25 05:43:39.470233] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:45.925 TLSTESTn1 00:23:45.925 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:46.181 Running I/O for 10 seconds... 00:23:56.143 00:23:56.144 Latency(us) 00:23:56.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.144 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:56.144 Verification LBA range: start 0x0 length 0x2000 00:23:56.144 TLSTESTn1 : 10.04 3176.11 12.41 0.00 0.00 40202.15 10971.21 74953.77 00:23:56.144 =================================================================================================================== 00:23:56.144 Total : 3176.11 12.41 0.00 0.00 40202.15 10971.21 74953.77 00:23:56.144 0 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:56.144 nvmf_trace.0 
00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1671016 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1671016 ']' 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1671016 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.144 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1671016 00:23:56.402 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:56.402 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:56.402 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1671016' 00:23:56.402 killing process with pid 1671016 00:23:56.402 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1671016 00:23:56.402 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.402 00:23:56.402 Latency(us) 00:23:56.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.402 =================================================================================================================== 00:23:56.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.402 [2024-07-25 05:43:49.849112] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:56.402 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
1671016 00:23:56.402 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:56.402 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:56.402 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:56.402 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.402 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:56.402 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.402 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.402 rmmod nvme_tcp 00:23:56.402 rmmod nvme_fabrics 00:23:56.688 rmmod nvme_keyring 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1670852 ']' 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1670852 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1670852 ']' 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1670852 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1670852 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1670852' 00:23:56.688 killing process with pid 1670852 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1670852 00:23:56.688 [2024-07-25 05:43:50.168189] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:56.688 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1670852 00:23:56.946 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.946 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.946 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.947 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.947 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.947 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.947 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.947 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.847 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.847 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:58.847 00:23:58.847 real 0m17.089s 00:23:58.847 user 0m21.628s 00:23:58.847 sys 
0m5.980s 00:23:58.847 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:58.847 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:58.847 ************************************ 00:23:58.847 END TEST nvmf_fips 00:23:58.847 ************************************ 00:23:58.847 05:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:23:58.847 05:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:58.847 05:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:58.847 05:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:58.847 05:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:58.847 ************************************ 00:23:58.847 START TEST nvmf_fuzz 00:23:58.847 ************************************ 00:23:58.847 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:58.847 * Looking for test storage... 
00:23:59.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.106 
05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.106 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.006 05:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.006 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.007 
05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:01.007 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:01.007 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.007 05:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:01.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.007 
05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:01.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:01.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:24:01.007 00:24:01.007 --- 10.0.0.2 ping statistics --- 00:24:01.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.007 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:01.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:24:01.007 00:24:01.007 --- 10.0.0.1 ping statistics --- 00:24:01.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.007 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.007 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1674261 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1674261 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1674261 ']' 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
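The trace above (nvmf/common.sh@242-268) shows the harness carving a point-to-point test topology out of two ports of one NIC: cvl_0_0 is moved into a private network namespace as the target side, cvl_0_1 stays in the root namespace as the initiator, connectivity is verified with a ping in each direction, and nvmf_tgt is then launched inside the namespace via `ip netns exec`. A minimal dry-run sketch of that sequence follows — it only echoes the commands, so it is safe to run anywhere; the interface names and addresses are copied from this log and would differ on other hosts, and the `run` wrapper would be replaced by real (root-privileged) execution to apply it:

```shell
#!/usr/bin/env bash
set -euo pipefail

run() { printf '+ %s\n' "$*"; }   # dry run: print each command instead of executing it

NS=cvl_0_0_ns_spdk     # namespace name used in the log
TGT_IF=cvl_0_0         # port that moves into the namespace (target side)
INI_IF=cvl_0_1         # port that stays in the root namespace (initiator side)

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port
run ping -c 1 10.0.0.2                         # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
run ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
```

Moving one port into a namespace lets a single host act as both NVMe-oF target and initiator over a real NIC-to-NIC path, which is why the log pings both directions before starting the target.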
00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.008 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 Malloc0 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.266 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.267 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.267 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:01.527 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.527 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:01.527 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:33.591 Fuzzing completed. 
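The `rpc_cmd` calls above configure the running target over its UNIX RPC socket; in the harness, `rpc_cmd` wraps SPDK's `scripts/rpc.py` against /var/tmp/spdk.sock. The configuration sequence maps onto it roughly as sketched below (a dry run that only prints the calls; the SPDK_DIR default is this job's workspace path and would differ elsewhere):

```shell
#!/usr/bin/env bash
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
# Dry run: print each RPC. For real use: "$SPDK_DIR/scripts/rpc.py" "$@"
rpc() { printf '+ rpc.py %s\n' "$*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
rpc bdev_malloc_create -b Malloc0 64 512             # 64 MiB RAM-backed bdev, 512-byte blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: allow any host
rpc nvmf_subsystem_add_ns "$NQN" Malloc0             # expose the bdev as a namespace
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The resulting transport ID string ("trtype:tcp adrfam:IPv4 subnqn:… traddr:10.0.0.2 trsvcid:4420") is exactly what the log hands to nvme_fuzz via `-F`.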
Shutting down the fuzz application 00:24:33.591 00:24:33.591 Dumping successful admin opcodes: 00:24:33.591 8, 9, 10, 24, 00:24:33.591 Dumping successful io opcodes: 00:24:33.591 0, 9, 00:24:33.591 NS: 0x200003aeff00 I/O qp, Total commands completed: 452216, total successful commands: 2628, random_seed: 2581418048 00:24:33.591 NS: 0x200003aeff00 admin qp, Total commands completed: 56528, total successful commands: 449, random_seed: 3849128128 00:24:33.591 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:33.591 Fuzzing completed. Shutting down the fuzz application 00:24:33.591 00:24:33.591 Dumping successful admin opcodes: 00:24:33.591 24, 00:24:33.591 Dumping successful io opcodes: 00:24:33.591 00:24:33.591 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3683538142 00:24:33.591 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3683659666 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:33.591 05:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:33.591 rmmod nvme_tcp 00:24:33.591 rmmod nvme_fabrics 00:24:33.591 rmmod nvme_keyring 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1674261 ']' 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1674261 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1674261 ']' 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1674261 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1674261 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1674261' 00:24:33.591 killing process with pid 1674261 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1674261 00:24:33.591 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1674261 00:24:33.850 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:33.850 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:33.850 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:33.850 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.850 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.850 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.850 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.850 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.752 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:35.752 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:35.752 00:24:35.752 real 0m36.903s 00:24:35.752 user 0m51.191s 00:24:35.752 sys 0m15.492s 00:24:35.752 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:35.752 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # 
set +x 00:24:35.752 ************************************ 00:24:35.752 END TEST nvmf_fuzz 00:24:35.752 ************************************ 00:24:35.752 05:44:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:35.752 05:44:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:35.752 05:44:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:35.752 05:44:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:36.011 ************************************ 00:24:36.011 START TEST nvmf_multiconnection 00:24:36.011 ************************************ 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:36.011 * Looking for test storage... 
00:24:36.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.011 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.012 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 
00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:37.913 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:37.913 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:37.913 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:24:37.913 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.913 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.914 05:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.914 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:38.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:38.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:24:38.172 00:24:38.172 --- 10.0.0.2 ping statistics --- 00:24:38.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.172 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:24:38.172 00:24:38.172 --- 10.0.0.1 ping statistics --- 00:24:38.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.172 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:38.172 05:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1679864 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1679864 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1679864 ']' 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.172 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.172 [2024-07-25 05:44:31.704908] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:24:38.172 [2024-07-25 05:44:31.705009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.172 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.172 [2024-07-25 05:44:31.773569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.172 [2024-07-25 05:44:31.865919] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.172 [2024-07-25 05:44:31.865974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.172 [2024-07-25 05:44:31.865999] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.172 [2024-07-25 05:44:31.866012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.172 [2024-07-25 05:44:31.866029] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:38.172 [2024-07-25 05:44:31.866134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.172 [2024-07-25 05:44:31.866167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.172 [2024-07-25 05:44:31.866280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.172 [2024-07-25 05:44:31.866283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.431 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.431 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:24:38.431 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.431 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.431 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.431 [2024-07-25 05:44:32.020567] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:38.431 05:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.431 Malloc1 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.431 [2024-07-25 05:44:32.081070] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.431 Malloc2 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.431 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.432 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.432 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:38.432 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.432 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.690 Malloc3 00:24:38.690 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.690 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:38.690 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.690 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.690 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 Malloc4 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 
05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 Malloc5 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 Malloc6 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 Malloc7 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.691 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 Malloc8 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 Malloc9 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 Malloc10 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.950 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.951 Malloc11 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:38.951 
05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.951 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:24:39.565 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:39.565 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:39.565 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:39.565 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:39.565 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:42.090 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:42.090 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:42.090 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:42.090 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:42.090 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:42.090 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:42.090 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.090 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:42.347 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:42.347 05:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:42.347 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:42.347 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:42.347 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:44.241 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:44.241 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:44.241 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:44.241 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:44.241 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:44.241 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:44.241 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:44.241 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:45.169 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:45.169 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:45.169 05:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:45.169 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:45.169 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:47.065 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:47.065 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:47.065 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:47.065 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:47.065 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:47.065 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:47.065 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:47.065 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:47.998 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:47.998 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:47.998 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.998 
05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:47.998 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:49.894 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:49.894 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:49.894 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:49.894 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:49.894 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:49.894 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:49.894 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.894 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:50.457 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:50.457 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:50.457 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:50.457 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:50.457 05:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:52.978 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:52.978 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:52.978 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:52.978 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:52.978 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:52.978 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:52.978 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.978 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:53.235 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:53.235 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:53.235 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:53.235 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:53.235 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:55.130 05:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:55.130 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:55.130 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:55.130 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:55.130 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:55.130 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:55.130 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.130 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:56.064 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:56.064 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:56.064 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:56.064 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:56.064 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:57.993 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:57.993 05:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:57.993 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:57.993 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:57.993 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:57.993 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:57.993 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.993 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:58.926 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:58.926 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:58.926 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:58.926 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:58.926 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:01.452 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:01.452 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:01.452 05:44:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:01.452 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:01.452 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.452 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:01.452 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.452 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:02.017 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:02.017 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:02.017 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:02.017 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:02.017 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:03.915 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:03.915 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:03.915 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:03.915 05:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:03.915 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.915 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:03.915 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.915 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:04.847 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:04.847 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.847 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.847 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:04.847 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:06.744 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:06.744 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:06.744 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:06.744 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:06.744 05:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.744 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:06.744 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.744 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:08.115 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:08.115 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:08.115 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:08.115 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:08.115 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:10.012 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:10.012 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:10.012 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:10.012 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:10.012 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.012 
05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:10.012 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:10.012 [global] 00:25:10.012 thread=1 00:25:10.012 invalidate=1 00:25:10.012 rw=read 00:25:10.012 time_based=1 00:25:10.012 runtime=10 00:25:10.012 ioengine=libaio 00:25:10.012 direct=1 00:25:10.012 bs=262144 00:25:10.012 iodepth=64 00:25:10.012 norandommap=1 00:25:10.012 numjobs=1 00:25:10.012 00:25:10.012 [job0] 00:25:10.012 filename=/dev/nvme0n1 00:25:10.012 [job1] 00:25:10.012 filename=/dev/nvme10n1 00:25:10.012 [job2] 00:25:10.012 filename=/dev/nvme1n1 00:25:10.012 [job3] 00:25:10.012 filename=/dev/nvme2n1 00:25:10.012 [job4] 00:25:10.012 filename=/dev/nvme3n1 00:25:10.012 [job5] 00:25:10.012 filename=/dev/nvme4n1 00:25:10.012 [job6] 00:25:10.012 filename=/dev/nvme5n1 00:25:10.012 [job7] 00:25:10.012 filename=/dev/nvme6n1 00:25:10.012 [job8] 00:25:10.012 filename=/dev/nvme7n1 00:25:10.012 [job9] 00:25:10.012 filename=/dev/nvme8n1 00:25:10.012 [job10] 00:25:10.012 filename=/dev/nvme9n1 00:25:10.012 Could not set queue depth (nvme0n1) 00:25:10.012 Could not set queue depth (nvme10n1) 00:25:10.012 Could not set queue depth (nvme1n1) 00:25:10.012 Could not set queue depth (nvme2n1) 00:25:10.012 Could not set queue depth (nvme3n1) 00:25:10.012 Could not set queue depth (nvme4n1) 00:25:10.012 Could not set queue depth (nvme5n1) 00:25:10.012 Could not set queue depth (nvme6n1) 00:25:10.012 Could not set queue depth (nvme7n1) 00:25:10.012 Could not set queue depth (nvme8n1) 00:25:10.012 Could not set queue depth (nvme9n1) 00:25:10.012 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.012 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:25:10.012 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.012 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.012 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.012 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.012 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.012 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.012 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.012 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.012 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.012 fio-3.35 00:25:10.012 Starting 11 threads 00:25:22.212 00:25:22.212 job0: (groupid=0, jobs=1): err= 0: pid=1684254: Thu Jul 25 05:45:14 2024 00:25:22.212 read: IOPS=704, BW=176MiB/s (185MB/s)(1781MiB/10116msec) 00:25:22.212 slat (usec): min=9, max=134329, avg=1064.34, stdev=3973.12 00:25:22.212 clat (usec): min=1257, max=257509, avg=89767.78, stdev=37624.49 00:25:22.212 lat (usec): min=1289, max=257632, avg=90832.12, stdev=38105.74 00:25:22.212 clat percentiles (msec): 00:25:22.212 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 41], 20.00th=[ 63], 00:25:22.212 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 100], 00:25:22.212 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 138], 95.00th=[ 153], 00:25:22.212 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 220], 99.95th=[ 226], 00:25:22.212 | 99.99th=[ 257] 00:25:22.212 bw ( KiB/s): min=112128, 
max=310784, per=10.32%, avg=180699.00, stdev=55476.21, samples=20 00:25:22.212 iops : min= 438, max= 1214, avg=705.85, stdev=216.71, samples=20 00:25:22.212 lat (msec) : 2=0.04%, 4=0.48%, 10=1.26%, 20=3.40%, 50=8.47% 00:25:22.212 lat (msec) : 100=47.77%, 250=38.55%, 500=0.03% 00:25:22.212 cpu : usr=0.40%, sys=1.93%, ctx=1435, majf=0, minf=4097 00:25:22.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:22.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.213 issued rwts: total=7123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.213 job1: (groupid=0, jobs=1): err= 0: pid=1684269: Thu Jul 25 05:45:14 2024 00:25:22.213 read: IOPS=759, BW=190MiB/s (199MB/s)(1917MiB/10088msec) 00:25:22.213 slat (usec): min=13, max=85208, avg=1091.72, stdev=3911.12 00:25:22.213 clat (usec): min=1708, max=225095, avg=83063.71, stdev=49337.71 00:25:22.213 lat (usec): min=1724, max=232342, avg=84155.43, stdev=49994.87 00:25:22.213 clat percentiles (msec): 00:25:22.213 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 29], 20.00th=[ 33], 00:25:22.213 | 30.00th=[ 36], 40.00th=[ 64], 50.00th=[ 82], 60.00th=[ 102], 00:25:22.213 | 70.00th=[ 117], 80.00th=[ 132], 90.00th=[ 150], 95.00th=[ 163], 00:25:22.213 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 218], 99.95th=[ 226], 00:25:22.213 | 99.99th=[ 226] 00:25:22.213 bw ( KiB/s): min=98107, max=453120, per=11.11%, avg=194638.30, stdev=116714.22, samples=20 00:25:22.213 iops : min= 383, max= 1770, avg=760.25, stdev=455.96, samples=20 00:25:22.213 lat (msec) : 2=0.07%, 4=0.63%, 10=2.74%, 20=3.74%, 50=29.04% 00:25:22.213 lat (msec) : 100=23.39%, 250=40.40% 00:25:22.213 cpu : usr=0.51%, sys=2.35%, ctx=1685, majf=0, minf=4097 00:25:22.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:22.213 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.213 issued rwts: total=7666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.213 job2: (groupid=0, jobs=1): err= 0: pid=1684312: Thu Jul 25 05:45:14 2024 00:25:22.213 read: IOPS=542, BW=136MiB/s (142MB/s)(1369MiB/10086msec) 00:25:22.213 slat (usec): min=10, max=111583, avg=1522.99, stdev=5325.36 00:25:22.213 clat (msec): min=2, max=236, avg=116.28, stdev=42.74 00:25:22.213 lat (msec): min=2, max=239, avg=117.80, stdev=43.38 00:25:22.213 clat percentiles (msec): 00:25:22.213 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 49], 20.00th=[ 93], 00:25:22.213 | 30.00th=[ 104], 40.00th=[ 112], 50.00th=[ 122], 60.00th=[ 132], 00:25:22.213 | 70.00th=[ 140], 80.00th=[ 150], 90.00th=[ 163], 95.00th=[ 174], 00:25:22.213 | 99.00th=[ 209], 99.50th=[ 215], 99.90th=[ 222], 99.95th=[ 224], 00:25:22.213 | 99.99th=[ 236] 00:25:22.213 bw ( KiB/s): min=84480, max=232960, per=7.91%, avg=138578.00, stdev=36085.50, samples=20 00:25:22.213 iops : min= 330, max= 910, avg=541.30, stdev=140.97, samples=20 00:25:22.213 lat (msec) : 4=0.77%, 10=1.92%, 20=1.59%, 50=5.81%, 100=16.91% 00:25:22.213 lat (msec) : 250=73.01% 00:25:22.213 cpu : usr=0.40%, sys=1.71%, ctx=1160, majf=0, minf=4097 00:25:22.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:22.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.213 issued rwts: total=5476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.213 job3: (groupid=0, jobs=1): err= 0: pid=1684329: Thu Jul 25 05:45:14 2024 00:25:22.213 read: IOPS=713, BW=178MiB/s (187MB/s)(1800MiB/10090msec) 00:25:22.213 slat (usec): min=9, max=63306, avg=922.05, 
stdev=3462.50 00:25:22.213 clat (usec): min=881, max=210351, avg=88707.92, stdev=43820.15 00:25:22.213 lat (usec): min=898, max=210380, avg=89629.98, stdev=44320.64 00:25:22.213 clat percentiles (msec): 00:25:22.213 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 17], 20.00th=[ 55], 00:25:22.213 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 102], 00:25:22.213 | 70.00th=[ 115], 80.00th=[ 130], 90.00th=[ 144], 95.00th=[ 157], 00:25:22.213 | 99.00th=[ 171], 99.50th=[ 176], 99.90th=[ 203], 99.95th=[ 207], 00:25:22.213 | 99.99th=[ 211] 00:25:22.213 bw ( KiB/s): min=122880, max=308224, per=10.43%, avg=182664.35, stdev=60102.37, samples=20 00:25:22.213 iops : min= 480, max= 1204, avg=713.45, stdev=234.75, samples=20 00:25:22.213 lat (usec) : 1000=0.03% 00:25:22.213 lat (msec) : 2=0.18%, 4=0.94%, 10=3.78%, 20=6.96%, 50=6.85% 00:25:22.213 lat (msec) : 100=40.47%, 250=40.79% 00:25:22.213 cpu : usr=0.34%, sys=2.02%, ctx=1636, majf=0, minf=4097 00:25:22.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:22.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.213 issued rwts: total=7200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.213 job4: (groupid=0, jobs=1): err= 0: pid=1684342: Thu Jul 25 05:45:14 2024 00:25:22.213 read: IOPS=632, BW=158MiB/s (166MB/s)(1602MiB/10121msec) 00:25:22.213 slat (usec): min=9, max=109582, avg=1265.08, stdev=4253.61 00:25:22.213 clat (msec): min=2, max=294, avg=99.78, stdev=37.72 00:25:22.213 lat (msec): min=2, max=294, avg=101.05, stdev=38.18 00:25:22.213 clat percentiles (msec): 00:25:22.213 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 52], 20.00th=[ 69], 00:25:22.213 | 30.00th=[ 84], 40.00th=[ 92], 50.00th=[ 100], 60.00th=[ 108], 00:25:22.213 | 70.00th=[ 116], 80.00th=[ 129], 90.00th=[ 146], 95.00th=[ 163], 00:25:22.213 | 
99.00th=[ 205], 99.50th=[ 220], 99.90th=[ 239], 99.95th=[ 245], 00:25:22.213 | 99.99th=[ 296] 00:25:22.213 bw ( KiB/s): min=73216, max=287744, per=9.27%, avg=162366.35, stdev=49459.88, samples=20 00:25:22.213 iops : min= 286, max= 1124, avg=634.20, stdev=193.22, samples=20 00:25:22.213 lat (msec) : 4=0.12%, 10=0.44%, 20=1.89%, 50=6.54%, 100=41.57% 00:25:22.213 lat (msec) : 250=49.41%, 500=0.03% 00:25:22.213 cpu : usr=0.30%, sys=2.08%, ctx=1230, majf=0, minf=4097 00:25:22.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:22.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.213 issued rwts: total=6406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.213 job5: (groupid=0, jobs=1): err= 0: pid=1684381: Thu Jul 25 05:45:14 2024 00:25:22.213 read: IOPS=625, BW=156MiB/s (164MB/s)(1585MiB/10127msec) 00:25:22.213 slat (usec): min=10, max=122352, avg=1146.23, stdev=4801.94 00:25:22.213 clat (usec): min=1122, max=254233, avg=101039.04, stdev=51352.36 00:25:22.213 lat (usec): min=1145, max=283112, avg=102185.27, stdev=52027.97 00:25:22.213 clat percentiles (msec): 00:25:22.213 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 30], 20.00th=[ 40], 00:25:22.213 | 30.00th=[ 66], 40.00th=[ 101], 50.00th=[ 114], 60.00th=[ 124], 00:25:22.213 | 70.00th=[ 136], 80.00th=[ 146], 90.00th=[ 159], 95.00th=[ 174], 00:25:22.213 | 99.00th=[ 209], 99.50th=[ 220], 99.90th=[ 234], 99.95th=[ 234], 00:25:22.213 | 99.99th=[ 255] 00:25:22.213 bw ( KiB/s): min=107008, max=315904, per=9.17%, avg=160628.45, stdev=54506.28, samples=20 00:25:22.213 iops : min= 418, max= 1234, avg=627.45, stdev=212.92, samples=20 00:25:22.213 lat (msec) : 2=0.08%, 4=0.36%, 10=2.32%, 20=2.86%, 50=20.38% 00:25:22.213 lat (msec) : 100=13.51%, 250=60.46%, 500=0.03% 00:25:22.213 cpu : usr=0.37%, sys=1.82%, ctx=1384, 
majf=0, minf=4097 00:25:22.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:22.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.213 issued rwts: total=6338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.213 job6: (groupid=0, jobs=1): err= 0: pid=1684385: Thu Jul 25 05:45:14 2024 00:25:22.213 read: IOPS=610, BW=153MiB/s (160MB/s)(1547MiB/10131msec) 00:25:22.213 slat (usec): min=10, max=168600, avg=1401.69, stdev=5630.69 00:25:22.213 clat (usec): min=1140, max=362953, avg=103313.64, stdev=45463.91 00:25:22.213 lat (usec): min=1188, max=362987, avg=104715.34, stdev=46153.83 00:25:22.213 clat percentiles (msec): 00:25:22.213 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 42], 20.00th=[ 68], 00:25:22.213 | 30.00th=[ 86], 40.00th=[ 97], 50.00th=[ 105], 60.00th=[ 114], 00:25:22.213 | 70.00th=[ 129], 80.00th=[ 142], 90.00th=[ 159], 95.00th=[ 171], 00:25:22.213 | 99.00th=[ 211], 99.50th=[ 218], 99.90th=[ 251], 99.95th=[ 330], 00:25:22.213 | 99.99th=[ 363] 00:25:22.213 bw ( KiB/s): min=94720, max=293301, per=8.95%, avg=156757.35, stdev=47451.72, samples=20 00:25:22.213 iops : min= 370, max= 1145, avg=612.25, stdev=185.29, samples=20 00:25:22.213 lat (msec) : 2=0.18%, 4=0.79%, 10=2.57%, 20=2.72%, 50=7.29% 00:25:22.213 lat (msec) : 100=30.45%, 250=55.89%, 500=0.11% 00:25:22.213 cpu : usr=0.49%, sys=1.93%, ctx=1358, majf=0, minf=4097 00:25:22.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:22.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.213 issued rwts: total=6187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.213 job7: (groupid=0, 
jobs=1): err= 0: pid=1684386: Thu Jul 25 05:45:14 2024 00:25:22.213 read: IOPS=651, BW=163MiB/s (171MB/s)(1634MiB/10027msec) 00:25:22.213 slat (usec): min=12, max=138403, avg=1267.53, stdev=4264.40 00:25:22.213 clat (msec): min=2, max=274, avg=96.83, stdev=36.69 00:25:22.213 lat (msec): min=2, max=274, avg=98.10, stdev=37.15 00:25:22.213 clat percentiles (msec): 00:25:22.213 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 43], 20.00th=[ 68], 00:25:22.213 | 30.00th=[ 82], 40.00th=[ 91], 50.00th=[ 99], 60.00th=[ 106], 00:25:22.213 | 70.00th=[ 114], 80.00th=[ 124], 90.00th=[ 138], 95.00th=[ 150], 00:25:22.213 | 99.00th=[ 211], 99.50th=[ 249], 99.90th=[ 251], 99.95th=[ 251], 00:25:22.213 | 99.99th=[ 275] 00:25:22.213 bw ( KiB/s): min=99840, max=319488, per=9.46%, avg=165719.55, stdev=52239.27, samples=20 00:25:22.213 iops : min= 390, max= 1248, avg=647.30, stdev=204.07, samples=20 00:25:22.213 lat (msec) : 4=0.05%, 10=0.21%, 50=11.72%, 100=40.31%, 250=47.50% 00:25:22.214 lat (msec) : 500=0.21% 00:25:22.214 cpu : usr=0.28%, sys=2.24%, ctx=1381, majf=0, minf=4097 00:25:22.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:22.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.214 issued rwts: total=6537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.214 job8: (groupid=0, jobs=1): err= 0: pid=1684387: Thu Jul 25 05:45:14 2024 00:25:22.214 read: IOPS=553, BW=138MiB/s (145MB/s)(1387MiB/10021msec) 00:25:22.214 slat (usec): min=9, max=74095, avg=1648.77, stdev=4885.42 00:25:22.214 clat (msec): min=16, max=242, avg=113.85, stdev=37.39 00:25:22.214 lat (msec): min=25, max=242, avg=115.50, stdev=38.01 00:25:22.214 clat percentiles (msec): 00:25:22.214 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 62], 20.00th=[ 85], 00:25:22.214 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 
114], 60.00th=[ 126], 00:25:22.214 | 70.00th=[ 138], 80.00th=[ 148], 90.00th=[ 157], 95.00th=[ 167], 00:25:22.214 | 99.00th=[ 203], 99.50th=[ 213], 99.90th=[ 230], 99.95th=[ 236], 00:25:22.214 | 99.99th=[ 243] 00:25:22.214 bw ( KiB/s): min=88576, max=300544, per=8.02%, avg=140431.45, stdev=46775.22, samples=20 00:25:22.214 iops : min= 346, max= 1174, avg=548.55, stdev=182.72, samples=20 00:25:22.214 lat (msec) : 20=0.02%, 50=6.70%, 100=27.01%, 250=66.26% 00:25:22.214 cpu : usr=0.35%, sys=1.89%, ctx=1168, majf=0, minf=3721 00:25:22.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:22.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.214 issued rwts: total=5549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.214 job9: (groupid=0, jobs=1): err= 0: pid=1684388: Thu Jul 25 05:45:14 2024 00:25:22.214 read: IOPS=557, BW=139MiB/s (146MB/s)(1407MiB/10089msec) 00:25:22.214 slat (usec): min=9, max=54790, avg=1299.49, stdev=4260.27 00:25:22.214 clat (msec): min=21, max=233, avg=113.37, stdev=36.53 00:25:22.214 lat (msec): min=21, max=256, avg=114.67, stdev=37.09 00:25:22.214 clat percentiles (msec): 00:25:22.214 | 1.00th=[ 38], 5.00th=[ 53], 10.00th=[ 61], 20.00th=[ 83], 00:25:22.214 | 30.00th=[ 95], 40.00th=[ 104], 50.00th=[ 112], 60.00th=[ 124], 00:25:22.214 | 70.00th=[ 136], 80.00th=[ 148], 90.00th=[ 159], 95.00th=[ 169], 00:25:22.214 | 99.00th=[ 203], 99.50th=[ 215], 99.90th=[ 232], 99.95th=[ 234], 00:25:22.214 | 99.99th=[ 234] 00:25:22.214 bw ( KiB/s): min=91136, max=265197, per=8.13%, avg=142411.85, stdev=41471.63, samples=20 00:25:22.214 iops : min= 356, max= 1035, avg=556.25, stdev=161.85, samples=20 00:25:22.214 lat (msec) : 50=3.64%, 100=32.77%, 250=63.59% 00:25:22.214 cpu : usr=0.26%, sys=1.75%, ctx=1339, majf=0, minf=4097 00:25:22.214 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:22.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.214 issued rwts: total=5627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.214 job10: (groupid=0, jobs=1): err= 0: pid=1684389: Thu Jul 25 05:45:14 2024 00:25:22.214 read: IOPS=514, BW=129MiB/s (135MB/s)(1302MiB/10125msec) 00:25:22.214 slat (usec): min=10, max=57533, avg=1580.00, stdev=4469.67 00:25:22.214 clat (msec): min=3, max=260, avg=122.79, stdev=33.02 00:25:22.214 lat (msec): min=3, max=260, avg=124.37, stdev=33.40 00:25:22.214 clat percentiles (msec): 00:25:22.214 | 1.00th=[ 16], 5.00th=[ 55], 10.00th=[ 95], 20.00th=[ 105], 00:25:22.214 | 30.00th=[ 111], 40.00th=[ 116], 50.00th=[ 123], 60.00th=[ 129], 00:25:22.214 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 161], 95.00th=[ 174], 00:25:22.214 | 99.00th=[ 209], 99.50th=[ 220], 99.90th=[ 243], 99.95th=[ 243], 00:25:22.214 | 99.99th=[ 262] 00:25:22.214 bw ( KiB/s): min=93508, max=211456, per=7.52%, avg=131651.40, stdev=25148.46, samples=20 00:25:22.214 iops : min= 365, max= 826, avg=514.25, stdev=98.26, samples=20 00:25:22.214 lat (msec) : 4=0.02%, 10=0.71%, 20=0.61%, 50=2.96%, 100=9.93% 00:25:22.214 lat (msec) : 250=85.73%, 500=0.04% 00:25:22.214 cpu : usr=0.24%, sys=1.80%, ctx=1207, majf=0, minf=4097 00:25:22.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:22.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:22.214 issued rwts: total=5206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:22.214 00:25:22.214 Run status group 0 (all jobs): 00:25:22.214 READ: bw=1710MiB/s (1794MB/s), 
129MiB/s-190MiB/s (135MB/s-199MB/s), io=16.9GiB (18.2GB), run=10021-10131msec 00:25:22.214 00:25:22.214 Disk stats (read/write): 00:25:22.214 nvme0n1: ios=14035/0, merge=0/0, ticks=1238259/0, in_queue=1238259, util=96.98% 00:25:22.214 nvme10n1: ios=15122/0, merge=0/0, ticks=1232144/0, in_queue=1232144, util=97.22% 00:25:22.214 nvme1n1: ios=10752/0, merge=0/0, ticks=1230544/0, in_queue=1230544, util=97.53% 00:25:22.214 nvme2n1: ios=14189/0, merge=0/0, ticks=1235900/0, in_queue=1235900, util=97.69% 00:25:22.214 nvme3n1: ios=12597/0, merge=0/0, ticks=1234843/0, in_queue=1234843, util=97.80% 00:25:22.214 nvme4n1: ios=12468/0, merge=0/0, ticks=1236750/0, in_queue=1236750, util=98.21% 00:25:22.214 nvme5n1: ios=12078/0, merge=0/0, ticks=1231863/0, in_queue=1231863, util=98.37% 00:25:22.214 nvme6n1: ios=12826/0, merge=0/0, ticks=1237098/0, in_queue=1237098, util=98.49% 00:25:22.214 nvme7n1: ios=10885/0, merge=0/0, ticks=1235381/0, in_queue=1235381, util=98.90% 00:25:22.214 nvme8n1: ios=11038/0, merge=0/0, ticks=1233975/0, in_queue=1233975, util=99.09% 00:25:22.214 nvme9n1: ios=10209/0, merge=0/0, ticks=1232097/0, in_queue=1232097, util=99.22% 00:25:22.214 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:22.214 [global] 00:25:22.214 thread=1 00:25:22.214 invalidate=1 00:25:22.214 rw=randwrite 00:25:22.214 time_based=1 00:25:22.214 runtime=10 00:25:22.214 ioengine=libaio 00:25:22.214 direct=1 00:25:22.214 bs=262144 00:25:22.214 iodepth=64 00:25:22.214 norandommap=1 00:25:22.214 numjobs=1 00:25:22.214 00:25:22.214 [job0] 00:25:22.214 filename=/dev/nvme0n1 00:25:22.214 [job1] 00:25:22.214 filename=/dev/nvme10n1 00:25:22.214 [job2] 00:25:22.214 filename=/dev/nvme1n1 00:25:22.214 [job3] 00:25:22.214 filename=/dev/nvme2n1 00:25:22.214 [job4] 00:25:22.214 filename=/dev/nvme3n1 00:25:22.214 [job5] 00:25:22.214 
filename=/dev/nvme4n1 00:25:22.214 [job6] 00:25:22.214 filename=/dev/nvme5n1 00:25:22.214 [job7] 00:25:22.214 filename=/dev/nvme6n1 00:25:22.214 [job8] 00:25:22.214 filename=/dev/nvme7n1 00:25:22.214 [job9] 00:25:22.214 filename=/dev/nvme8n1 00:25:22.214 [job10] 00:25:22.214 filename=/dev/nvme9n1 00:25:22.214 Could not set queue depth (nvme0n1) 00:25:22.214 Could not set queue depth (nvme10n1) 00:25:22.214 Could not set queue depth (nvme1n1) 00:25:22.214 Could not set queue depth (nvme2n1) 00:25:22.214 Could not set queue depth (nvme3n1) 00:25:22.214 Could not set queue depth (nvme4n1) 00:25:22.214 Could not set queue depth (nvme5n1) 00:25:22.214 Could not set queue depth (nvme6n1) 00:25:22.214 Could not set queue depth (nvme7n1) 00:25:22.214 Could not set queue depth (nvme8n1) 00:25:22.214 Could not set queue depth (nvme9n1) 00:25:22.214 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.214 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.214 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.214 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.214 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.214 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.214 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.214 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.214 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:25:22.214 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.214 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:22.214 fio-3.35 00:25:22.214 Starting 11 threads 00:25:32.212 00:25:32.212 job0: (groupid=0, jobs=1): err= 0: pid=1686059: Thu Jul 25 05:45:25 2024 00:25:32.212 write: IOPS=482, BW=121MiB/s (126MB/s)(1229MiB/10191msec); 0 zone resets 00:25:32.212 slat (usec): min=24, max=71492, avg=1418.91, stdev=4310.99 00:25:32.212 clat (usec): min=1934, max=399803, avg=131102.18, stdev=76790.57 00:25:32.212 lat (usec): min=1997, max=399833, avg=132521.09, stdev=77823.82 00:25:32.212 clat percentiles (msec): 00:25:32.212 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 37], 20.00th=[ 61], 00:25:32.212 | 30.00th=[ 83], 40.00th=[ 104], 50.00th=[ 120], 60.00th=[ 142], 00:25:32.212 | 70.00th=[ 163], 80.00th=[ 199], 90.00th=[ 241], 95.00th=[ 284], 00:25:32.212 | 99.00th=[ 321], 99.50th=[ 338], 99.90th=[ 388], 99.95th=[ 388], 00:25:32.212 | 99.99th=[ 401] 00:25:32.212 bw ( KiB/s): min=51200, max=239616, per=9.30%, avg=124254.40, stdev=48787.77, samples=20 00:25:32.212 iops : min= 200, max= 936, avg=485.35, stdev=190.60, samples=20 00:25:32.212 lat (msec) : 2=0.02%, 4=0.14%, 10=0.92%, 20=2.73%, 50=13.69% 00:25:32.212 lat (msec) : 100=20.87%, 250=53.69%, 500=7.95% 00:25:32.212 cpu : usr=1.44%, sys=1.78%, ctx=2952, majf=0, minf=1 00:25:32.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:32.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.212 issued rwts: total=0,4917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.212 job1: (groupid=0, jobs=1): err= 0: pid=1686068: Thu Jul 25 05:45:25 2024 00:25:32.212 write: IOPS=395, 
BW=98.8MiB/s (104MB/s)(1007MiB/10192msec); 0 zone resets 00:25:32.212 slat (usec): min=16, max=47413, avg=1926.06, stdev=5127.01 00:25:32.212 clat (usec): min=1295, max=404990, avg=159934.89, stdev=95776.06 00:25:32.212 lat (usec): min=1369, max=405024, avg=161860.95, stdev=97268.72 00:25:32.212 clat percentiles (msec): 00:25:32.212 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 25], 20.00th=[ 64], 00:25:32.212 | 30.00th=[ 108], 40.00th=[ 132], 50.00th=[ 153], 60.00th=[ 180], 00:25:32.212 | 70.00th=[ 211], 80.00th=[ 257], 90.00th=[ 292], 95.00th=[ 321], 00:25:32.212 | 99.00th=[ 372], 99.50th=[ 384], 99.90th=[ 393], 99.95th=[ 393], 00:25:32.212 | 99.99th=[ 405] 00:25:32.212 bw ( KiB/s): min=47104, max=221184, per=7.60%, avg=101468.70, stdev=47602.57, samples=20 00:25:32.212 iops : min= 184, max= 864, avg=396.35, stdev=185.95, samples=20 00:25:32.212 lat (msec) : 2=0.10%, 4=0.57%, 10=2.73%, 20=5.12%, 50=8.59% 00:25:32.212 lat (msec) : 100=10.45%, 250=50.19%, 500=22.25% 00:25:32.212 cpu : usr=1.46%, sys=1.36%, ctx=2189, majf=0, minf=1 00:25:32.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:32.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.212 issued rwts: total=0,4027,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.212 job2: (groupid=0, jobs=1): err= 0: pid=1686075: Thu Jul 25 05:45:25 2024 00:25:32.212 write: IOPS=431, BW=108MiB/s (113MB/s)(1099MiB/10190msec); 0 zone resets 00:25:32.212 slat (usec): min=15, max=125366, avg=1588.86, stdev=5814.15 00:25:32.212 clat (usec): min=1192, max=486986, avg=146660.01, stdev=109169.27 00:25:32.213 lat (usec): min=1281, max=487030, avg=148248.87, stdev=110735.66 00:25:32.213 clat percentiles (msec): 00:25:32.213 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 46], 00:25:32.213 | 30.00th=[ 68], 40.00th=[ 
101], 50.00th=[ 126], 60.00th=[ 150], 00:25:32.213 | 70.00th=[ 201], 80.00th=[ 253], 90.00th=[ 300], 95.00th=[ 334], 00:25:32.213 | 99.00th=[ 464], 99.50th=[ 472], 99.90th=[ 485], 99.95th=[ 489], 00:25:32.213 | 99.99th=[ 489] 00:25:32.213 bw ( KiB/s): min=36864, max=221184, per=8.31%, avg=110951.70, stdev=51579.88, samples=20 00:25:32.213 iops : min= 144, max= 864, avg=433.40, stdev=201.48, samples=20 00:25:32.213 lat (msec) : 2=0.16%, 4=0.61%, 10=4.57%, 20=4.32%, 50=13.24% 00:25:32.213 lat (msec) : 100=16.83%, 250=39.66%, 500=20.60% 00:25:32.213 cpu : usr=1.07%, sys=1.60%, ctx=2847, majf=0, minf=1 00:25:32.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:32.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.213 issued rwts: total=0,4397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.213 job3: (groupid=0, jobs=1): err= 0: pid=1686076: Thu Jul 25 05:45:25 2024 00:25:32.213 write: IOPS=507, BW=127MiB/s (133MB/s)(1296MiB/10219msec); 0 zone resets 00:25:32.213 slat (usec): min=24, max=132517, avg=1416.29, stdev=4716.56 00:25:32.213 clat (usec): min=1347, max=461993, avg=124645.21, stdev=94335.74 00:25:32.213 lat (usec): min=1441, max=462035, avg=126061.50, stdev=95537.50 00:25:32.213 clat percentiles (msec): 00:25:32.213 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 22], 20.00th=[ 50], 00:25:32.213 | 30.00th=[ 61], 40.00th=[ 79], 50.00th=[ 91], 60.00th=[ 122], 00:25:32.213 | 70.00th=[ 159], 80.00th=[ 213], 90.00th=[ 271], 95.00th=[ 309], 00:25:32.213 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 451], 99.95th=[ 451], 00:25:32.213 | 99.99th=[ 464] 00:25:32.213 bw ( KiB/s): min=47104, max=283136, per=9.81%, avg=131074.60, stdev=74005.54, samples=20 00:25:32.213 iops : min= 184, max= 1106, avg=511.95, stdev=289.13, samples=20 00:25:32.213 lat (msec) : 2=0.06%, 
4=0.27%, 10=1.58%, 20=7.35%, 50=11.09% 00:25:32.213 lat (msec) : 100=34.07%, 250=32.05%, 500=13.52% 00:25:32.213 cpu : usr=1.63%, sys=1.80%, ctx=2916, majf=0, minf=1 00:25:32.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:32.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.213 issued rwts: total=0,5183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.213 job4: (groupid=0, jobs=1): err= 0: pid=1686077: Thu Jul 25 05:45:25 2024 00:25:32.213 write: IOPS=535, BW=134MiB/s (140MB/s)(1368MiB/10221msec); 0 zone resets 00:25:32.213 slat (usec): min=18, max=127833, avg=1252.01, stdev=4872.30 00:25:32.213 clat (usec): min=1098, max=526260, avg=118230.10, stdev=100100.09 00:25:32.213 lat (usec): min=1153, max=526303, avg=119482.11, stdev=101383.60 00:25:32.213 clat percentiles (msec): 00:25:32.213 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 32], 00:25:32.213 | 30.00th=[ 51], 40.00th=[ 77], 50.00th=[ 88], 60.00th=[ 107], 00:25:32.213 | 70.00th=[ 157], 80.00th=[ 192], 90.00th=[ 253], 95.00th=[ 330], 00:25:32.213 | 99.00th=[ 477], 99.50th=[ 493], 99.90th=[ 510], 99.95th=[ 510], 00:25:32.213 | 99.99th=[ 527] 00:25:32.213 bw ( KiB/s): min=38912, max=274432, per=10.37%, avg=138463.30, stdev=77803.12, samples=20 00:25:32.213 iops : min= 152, max= 1072, avg=540.85, stdev=303.94, samples=20 00:25:32.213 lat (msec) : 2=0.26%, 4=0.82%, 10=5.12%, 20=6.25%, 50=17.45% 00:25:32.213 lat (msec) : 100=26.92%, 250=33.02%, 500=9.72%, 750=0.44% 00:25:32.213 cpu : usr=1.42%, sys=1.82%, ctx=3438, majf=0, minf=1 00:25:32.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:32.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:25:32.213 issued rwts: total=0,5472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.213 job5: (groupid=0, jobs=1): err= 0: pid=1686078: Thu Jul 25 05:45:25 2024 00:25:32.213 write: IOPS=452, BW=113MiB/s (119MB/s)(1157MiB/10218msec); 0 zone resets 00:25:32.213 slat (usec): min=18, max=134178, avg=1560.18, stdev=5021.33 00:25:32.213 clat (usec): min=1029, max=505747, avg=139325.35, stdev=89735.41 00:25:32.213 lat (usec): min=1075, max=505790, avg=140885.52, stdev=90771.06 00:25:32.213 clat percentiles (msec): 00:25:32.213 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 51], 20.00th=[ 74], 00:25:32.213 | 30.00th=[ 82], 40.00th=[ 97], 50.00th=[ 121], 60.00th=[ 140], 00:25:32.213 | 70.00th=[ 169], 80.00th=[ 203], 90.00th=[ 251], 95.00th=[ 309], 00:25:32.213 | 99.00th=[ 477], 99.50th=[ 498], 99.90th=[ 498], 99.95th=[ 506], 00:25:32.213 | 99.99th=[ 506] 00:25:32.213 bw ( KiB/s): min=32768, max=205312, per=8.75%, avg=116873.95, stdev=47592.68, samples=20 00:25:32.213 iops : min= 128, max= 802, avg=456.50, stdev=185.87, samples=20 00:25:32.213 lat (msec) : 2=0.32%, 4=0.58%, 10=1.10%, 20=1.43%, 50=6.40% 00:25:32.213 lat (msec) : 100=31.03%, 250=49.16%, 500=9.92%, 750=0.06% 00:25:32.213 cpu : usr=1.27%, sys=1.56%, ctx=2424, majf=0, minf=1 00:25:32.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:32.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.213 issued rwts: total=0,4628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.213 job6: (groupid=0, jobs=1): err= 0: pid=1686079: Thu Jul 25 05:45:25 2024 00:25:32.213 write: IOPS=556, BW=139MiB/s (146MB/s)(1421MiB/10222msec); 0 zone resets 00:25:32.213 slat (usec): min=16, max=127291, avg=1235.04, stdev=4131.51 00:25:32.213 clat (usec): min=1417, 
max=459787, avg=113745.24, stdev=71224.37 00:25:32.213 lat (usec): min=1538, max=459814, avg=114980.28, stdev=71982.53 00:25:32.213 clat percentiles (msec): 00:25:32.213 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 28], 20.00th=[ 47], 00:25:32.213 | 30.00th=[ 67], 40.00th=[ 83], 50.00th=[ 102], 60.00th=[ 132], 00:25:32.213 | 70.00th=[ 150], 80.00th=[ 178], 90.00th=[ 213], 95.00th=[ 236], 00:25:32.213 | 99.00th=[ 275], 99.50th=[ 355], 99.90th=[ 447], 99.95th=[ 447], 00:25:32.213 | 99.99th=[ 460] 00:25:32.213 bw ( KiB/s): min=63488, max=361472, per=10.77%, avg=143886.70, stdev=66193.45, samples=20 00:25:32.213 iops : min= 248, max= 1412, avg=562.05, stdev=258.57, samples=20 00:25:32.213 lat (msec) : 2=0.14%, 4=0.86%, 10=2.46%, 20=3.71%, 50=16.08% 00:25:32.213 lat (msec) : 100=26.39%, 250=47.55%, 500=2.80% 00:25:32.213 cpu : usr=1.58%, sys=1.78%, ctx=3127, majf=0, minf=1 00:25:32.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:32.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.213 issued rwts: total=0,5684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.213 job7: (groupid=0, jobs=1): err= 0: pid=1686080: Thu Jul 25 05:45:25 2024 00:25:32.213 write: IOPS=487, BW=122MiB/s (128MB/s)(1240MiB/10177msec); 0 zone resets 00:25:32.213 slat (usec): min=19, max=82165, avg=1347.12, stdev=4304.46 00:25:32.213 clat (usec): min=1380, max=418651, avg=129947.21, stdev=88670.08 00:25:32.213 lat (usec): min=1421, max=421998, avg=131294.33, stdev=89838.39 00:25:32.213 clat percentiles (msec): 00:25:32.213 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 35], 20.00th=[ 50], 00:25:32.213 | 30.00th=[ 59], 40.00th=[ 81], 50.00th=[ 112], 60.00th=[ 146], 00:25:32.213 | 70.00th=[ 184], 80.00th=[ 211], 90.00th=[ 249], 95.00th=[ 305], 00:25:32.213 | 99.00th=[ 359], 99.50th=[ 397], 99.90th=[ 
414], 99.95th=[ 418], 00:25:32.213 | 99.99th=[ 418] 00:25:32.213 bw ( KiB/s): min=51200, max=265728, per=9.38%, avg=125312.00, stdev=57935.92, samples=20 00:25:32.213 iops : min= 200, max= 1038, avg=489.50, stdev=226.31, samples=20 00:25:32.213 lat (msec) : 2=0.08%, 4=0.26%, 10=3.07%, 20=1.92%, 50=15.19% 00:25:32.213 lat (msec) : 100=25.84%, 250=43.99%, 500=9.66% 00:25:32.213 cpu : usr=1.55%, sys=1.76%, ctx=3095, majf=0, minf=1 00:25:32.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:32.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.213 issued rwts: total=0,4958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.213 job8: (groupid=0, jobs=1): err= 0: pid=1686081: Thu Jul 25 05:45:25 2024 00:25:32.213 write: IOPS=441, BW=110MiB/s (116MB/s)(1125MiB/10193msec); 0 zone resets 00:25:32.213 slat (usec): min=15, max=60306, avg=1589.25, stdev=4703.11 00:25:32.213 clat (usec): min=1460, max=409705, avg=143296.36, stdev=95350.62 00:25:32.213 lat (usec): min=1523, max=409750, avg=144885.60, stdev=96765.33 00:25:32.213 clat percentiles (msec): 00:25:32.213 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 25], 20.00th=[ 57], 00:25:32.213 | 30.00th=[ 87], 40.00th=[ 111], 50.00th=[ 130], 60.00th=[ 148], 00:25:32.213 | 70.00th=[ 180], 80.00th=[ 205], 90.00th=[ 309], 95.00th=[ 338], 00:25:32.213 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 405], 99.95th=[ 409], 00:25:32.213 | 99.99th=[ 409] 00:25:32.213 bw ( KiB/s): min=45056, max=225792, per=8.50%, avg=113577.40, stdev=50313.07, samples=20 00:25:32.213 iops : min= 176, max= 882, avg=443.65, stdev=196.54, samples=20 00:25:32.213 lat (msec) : 2=0.09%, 4=0.58%, 10=2.69%, 20=4.53%, 50=10.00% 00:25:32.214 lat (msec) : 100=16.38%, 250=51.00%, 500=14.73% 00:25:32.214 cpu : usr=1.40%, sys=1.37%, ctx=2723, majf=0, minf=1 
00:25:32.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:32.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.214 issued rwts: total=0,4500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.214 job9: (groupid=0, jobs=1): err= 0: pid=1686082: Thu Jul 25 05:45:25 2024 00:25:32.214 write: IOPS=499, BW=125MiB/s (131MB/s)(1276MiB/10221msec); 0 zone resets 00:25:32.214 slat (usec): min=15, max=104659, avg=1055.64, stdev=3889.88 00:25:32.214 clat (usec): min=1401, max=477055, avg=127041.43, stdev=88037.68 00:25:32.214 lat (usec): min=1442, max=479196, avg=128097.08, stdev=88941.88 00:25:32.214 clat percentiles (msec): 00:25:32.214 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 31], 20.00th=[ 48], 00:25:32.214 | 30.00th=[ 71], 40.00th=[ 97], 50.00th=[ 113], 60.00th=[ 132], 00:25:32.214 | 70.00th=[ 148], 80.00th=[ 190], 90.00th=[ 257], 95.00th=[ 296], 00:25:32.214 | 99.00th=[ 405], 99.50th=[ 439], 99.90th=[ 460], 99.95th=[ 468], 00:25:32.214 | 99.99th=[ 477] 00:25:32.214 bw ( KiB/s): min=65024, max=207872, per=9.66%, avg=129059.95, stdev=41640.07, samples=20 00:25:32.214 iops : min= 254, max= 812, avg=504.10, stdev=162.66, samples=20 00:25:32.214 lat (msec) : 2=0.02%, 4=0.25%, 10=2.14%, 20=4.04%, 50=15.65% 00:25:32.214 lat (msec) : 100=19.55%, 250=46.55%, 500=11.79% 00:25:32.214 cpu : usr=1.65%, sys=1.82%, ctx=3484, majf=0, minf=1 00:25:32.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:32.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.214 issued rwts: total=0,5104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.214 job10: (groupid=0, 
jobs=1): err= 0: pid=1686089: Thu Jul 25 05:45:25 2024 00:25:32.214 write: IOPS=437, BW=109MiB/s (115MB/s)(1116MiB/10188msec); 0 zone resets 00:25:32.214 slat (usec): min=19, max=71402, avg=1251.29, stdev=4794.99 00:25:32.214 clat (usec): min=1022, max=435053, avg=144804.37, stdev=106717.96 00:25:32.214 lat (usec): min=1053, max=435089, avg=146055.66, stdev=108024.59 00:25:32.214 clat percentiles (msec): 00:25:32.214 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 35], 00:25:32.214 | 30.00th=[ 63], 40.00th=[ 96], 50.00th=[ 125], 60.00th=[ 169], 00:25:32.214 | 70.00th=[ 211], 80.00th=[ 247], 90.00th=[ 292], 95.00th=[ 338], 00:25:32.214 | 99.00th=[ 401], 99.50th=[ 422], 99.90th=[ 435], 99.95th=[ 435], 00:25:32.214 | 99.99th=[ 435] 00:25:32.214 bw ( KiB/s): min=43008, max=204288, per=8.43%, avg=112614.40, stdev=42897.65, samples=20 00:25:32.214 iops : min= 168, max= 798, avg=439.90, stdev=167.57, samples=20 00:25:32.214 lat (msec) : 2=0.16%, 4=1.28%, 10=4.50%, 20=6.75%, 50=12.03% 00:25:32.214 lat (msec) : 100=17.08%, 250=38.82%, 500=19.39% 00:25:32.214 cpu : usr=1.56%, sys=1.51%, ctx=3328, majf=0, minf=1 00:25:32.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:32.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:32.214 issued rwts: total=0,4462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:32.214 00:25:32.214 Run status group 0 (all jobs): 00:25:32.214 WRITE: bw=1304MiB/s (1368MB/s), 98.8MiB/s-139MiB/s (104MB/s-146MB/s), io=13.0GiB (14.0GB), run=10177-10222msec 00:25:32.214 00:25:32.214 Disk stats (read/write): 00:25:32.214 nvme0n1: ios=52/9817, merge=0/0, ticks=1210/1245271, in_queue=1246481, util=99.82% 00:25:32.214 nvme10n1: ios=50/8038, merge=0/0, ticks=2514/1240474, in_queue=1242988, util=100.00% 00:25:32.214 nvme1n1: ios=48/8778, merge=0/0, 
ticks=74/1245775, in_queue=1245849, util=97.87% 00:25:32.214 nvme2n1: ios=45/10333, merge=0/0, ticks=1779/1236145, in_queue=1237924, util=100.00% 00:25:32.214 nvme3n1: ios=0/10909, merge=0/0, ticks=0/1244488, in_queue=1244488, util=97.83% 00:25:32.214 nvme4n1: ios=46/9224, merge=0/0, ticks=2257/1235764, in_queue=1238021, util=100.00% 00:25:32.214 nvme5n1: ios=49/11332, merge=0/0, ticks=2931/1221827, in_queue=1224758, util=100.00% 00:25:32.214 nvme6n1: ios=0/9907, merge=0/0, ticks=0/1246722, in_queue=1246722, util=98.39% 00:25:32.214 nvme7n1: ios=0/8980, merge=0/0, ticks=0/1244803, in_queue=1244803, util=98.81% 00:25:32.214 nvme8n1: ios=0/10173, merge=0/0, ticks=0/1252071, in_queue=1252071, util=99.00% 00:25:32.214 nvme9n1: ios=0/8903, merge=0/0, ticks=0/1251777, in_queue=1251777, util=99.09% 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:32.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:32.214 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:32.214 05:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.214 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:32.472 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:32.472 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:32.472 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:32.472 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:32.472 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:32.472 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:32.472 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:32.472 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:32.472 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:32.472 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.472 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.472 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.472 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.472 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:32.729 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.729 05:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.729 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:32.987 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.987 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:33.244 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.244 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:33.501 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.501 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:33.758 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:33.758 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:33.758 05:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:33.758 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.759 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.759 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.759 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.759 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:34.017 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDK10 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:34.017 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:34.017 05:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:34.017 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:34.017 rmmod nvme_tcp 00:25:34.017 rmmod nvme_fabrics 00:25:34.275 rmmod nvme_keyring 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1679864 ']' 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1679864 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1679864 ']' 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1679864 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1679864 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1679864' 00:25:34.275 killing process with pid 1679864 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1679864 00:25:34.275 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1679864 00:25:34.846 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:34.846 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:34.846 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:34.846 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.846 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:34.846 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.846 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.846 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.743 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:36.743 00:25:36.743 real 1m0.887s 00:25:36.743 user 3m25.956s 00:25:36.743 sys 0m23.417s 00:25:36.743 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:36.743 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.743 ************************************ 00:25:36.743 END TEST nvmf_multiconnection 00:25:36.743 ************************************ 00:25:36.743 05:45:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:36.743 05:45:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:36.743 05:45:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:36.743 05:45:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:36.743 ************************************ 00:25:36.743 
START TEST nvmf_initiator_timeout 00:25:36.743 ************************************ 00:25:36.743 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:36.743 * Looking for test storage... 00:25:36.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:36.743 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.000 05:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.000 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.001 05:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:37.001 
05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.001 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:38.897 
05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:38.897 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:38.897 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:38.897 05:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:38.897 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.897 05:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:38.897 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:38.897 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:38.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:38.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:25:38.898 00:25:38.898 --- 10.0.0.2 ping statistics --- 00:25:38.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.898 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:38.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:25:38.898 00:25:38.898 --- 10.0.0.1 ping statistics --- 00:25:38.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.898 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:38.898 
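The trace above (nvmf/common.sh@242 through @268) builds a point-to-point TCP test topology on a single host: one port of the NIC is moved into a fresh network namespace to act as the target side, the other port stays in the root namespace as the initiator, and a ping in each direction verifies the link before the test proceeds. A dry-run sketch of the same sequence, with interface names and addresses taken from the log; the `run` helper only prints each command, since the real ones need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology set up in the trace above.
# Interface names and IPs are copied from the log; nothing is executed.
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk        # namespace holding the target-side port
TGT_IF=cvl_0_0            # target interface (10.0.0.2, inside $NS)
INI_IF=cvl_0_1            # initiator interface (10.0.0.1, root ns)

netns_plan() {
  run ip -4 addr flush "$TGT_IF"
  run ip -4 addr flush "$INI_IF"
  run ip netns add "$NS"
  run ip link set "$TGT_IF" netns "$NS"
  run ip addr add 10.0.0.1/24 dev "$INI_IF"
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  run ip link set "$INI_IF" up
  run ip netns exec "$NS" ip link set "$TGT_IF" up
  run ip netns exec "$NS" ip link set lo up
  run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                      # initiator -> target
  run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
}

netns_plan
```

Running `nvmf_tgt` inside `$NS` (the `NVMF_TARGET_NS_CMD` prefix in the log) is what lets the initiator-side kernel driver and the userspace target share one machine without the loopback shortcut.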
05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1689404 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1689404 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1689404 ']' 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.898 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.156 [2024-07-25 05:45:32.618138] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:25:39.156 [2024-07-25 05:45:32.618210] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.156 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.156 [2024-07-25 05:45:32.680587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:39.156 [2024-07-25 05:45:32.766687] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.156 [2024-07-25 05:45:32.766741] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.156 [2024-07-25 05:45:32.766754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.156 [2024-07-25 05:45:32.766765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.156 [2024-07-25 05:45:32.766774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
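The `waitforlisten 1689404` call above blocks until the freshly launched `nvmf_tgt` is up and accepting RPCs on `/var/tmp/spdk.sock`. A minimal stand-in for that helper, assuming a simple "process alive + socket exists" check; SPDK's real helper additionally confirms readiness over the RPC socket itself, which this sketch omits:

```shell
#!/usr/bin/env bash
# Minimal stand-in for waitforlisten: poll until $pid is still alive and
# $sock exists as a UNIX socket, or give up after max_retries attempts.
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock}
  local max_retries=100 i=0
  while (( i++ < max_retries )); do
    kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
    [ -S "$sock" ] && return 0              # socket is up; ready for RPCs
    sleep 0.1
  done
  return 1                                  # timed out waiting
}
```

The dead-process check matters: without it, a target that crashes during EAL init would make the caller spin for the full retry budget instead of failing immediately.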
00:25:39.156 [2024-07-25 05:45:32.766833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.156 [2024-07-25 05:45:32.766889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.156 [2024-07-25 05:45:32.766953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:39.156 [2024-07-25 05:45:32.766956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.414 Malloc0 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.414 05:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.414 Delay0 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.414 [2024-07-25 05:45:32.952696] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.414 [2024-07-25 05:45:32.981025] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.414 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:39.977 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:39.977 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:39.977 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.977 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:39.977 05:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:42.496 05:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:42.496 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:42.496 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:42.496 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:42.496 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.496 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:42.496 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1689707 00:25:42.496 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:42.496 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:42.496 [global] 00:25:42.496 thread=1 00:25:42.496 invalidate=1 00:25:42.496 rw=write 00:25:42.496 time_based=1 00:25:42.496 runtime=60 00:25:42.496 ioengine=libaio 00:25:42.496 direct=1 00:25:42.496 bs=4096 00:25:42.496 iodepth=1 00:25:42.496 norandommap=0 00:25:42.496 numjobs=1 00:25:42.496 00:25:42.496 verify_dump=1 00:25:42.496 verify_backlog=512 00:25:42.496 verify_state_save=0 00:25:42.496 do_verify=1 00:25:42.496 verify=crc32c-intel 00:25:42.496 [job0] 00:25:42.496 filename=/dev/nvme0n1 00:25:42.496 Could not set queue depth (nvme0n1) 00:25:42.496 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:42.496 fio-3.35 00:25:42.496 Starting 1 thread 00:25:45.022 05:45:38 
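The per-option dump that the fio wrapper prints above (`[global]` … `[job0]`) corresponds to a single ini-style job file. A sketch that reassembles it with a here-doc so the workload can be replayed by hand; the temp-file path is illustrative, and actually running it needs fio plus the connected `/dev/nvme0n1` namespace from the log:

```shell
#!/usr/bin/env bash
# Reassemble the fio job from the [global]/[job0] dump in the log.
# This only writes the file; replaying it needs fio and the NVMe device.
job=$(mktemp /tmp/initiator_timeout.XXXXXX)
cat > "$job" <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
echo "$job"
# To replay (needs root + fio installed): fio "$job"
```

`iodepth=1` with `verify=crc32c-intel` is what makes the later per-I/O stall visible: each 4 KiB write must complete (and verify) before the next is issued.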
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.022 true 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.022 true 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.022 true 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.022 05:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.022 true 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.022 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.341 true 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.341 true 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.341 true 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
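The RPC sequence traced at initiator_timeout.sh@40–@51 first raises every `Delay0` latency to tens of seconds (past the initiator's I/O timeout, so in-flight writes stall) and then drops them back to 30 µs so the queued I/O can drain and fio can finish. A dry-run sketch of those calls, printing them instead of issuing them; the socket path is the default, and `rpc()` is a stand-in for driving `scripts/rpc.py` inside the target namespace as the real test does:

```shell
#!/usr/bin/env bash
# Dry-run of the bdev_delay latency updates from the trace above.
# Values are microseconds, copied from the log; rpc() only prints.
rpc() { printf '+ rpc.py -s /var/tmp/spdk.sock %s\n' "$*"; }

stall_io() {   # push latencies past the initiator timeout
  rpc bdev_delay_update_latency Delay0 avg_read 31000000
  rpc bdev_delay_update_latency Delay0 avg_write 31000000
  rpc bdev_delay_update_latency Delay0 p99_read 31000000
  rpc bdev_delay_update_latency Delay0 p99_write 310000000
}

restore_io() { # drop back to 30 us so queued I/O completes
  for lat in avg_read avg_write p99_read p99_write; do
    rpc bdev_delay_update_latency Delay0 "$lat" 30
  done
}

stall_io
restore_io
```

The fio summary that follows in the log shows the effect: read/write completion latencies cluster around 41–43 s while the delays are raised, yet the job still exits with `err= 0` because the initiator retries rather than failing.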
00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.341 true 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:48.341 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1689707 00:26:44.549 00:26:44.549 job0: (groupid=0, jobs=1): err= 0: pid=1689900: Thu Jul 25 05:46:35 2024 00:26:44.549 read: IOPS=7, BW=30.2KiB/s (30.9kB/s)(1812KiB/60007msec) 00:26:44.549 slat (usec): min=11, max=6809, avg=38.41, stdev=318.97 00:26:44.549 clat (usec): min=421, max=41055k, avg=131980.32, stdev=1926989.23 00:26:44.549 lat (msec): min=7, max=41055, avg=132.02, stdev=1926.99 00:26:44.549 clat percentiles (msec): 00:26:44.549 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:26:44.549 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:26:44.549 | 70.00th=[ 43], 80.00th=[ 43], 90.00th=[ 43], 95.00th=[ 43], 00:26:44.549 | 99.00th=[ 43], 99.50th=[ 43], 99.90th=[17113], 99.95th=[17113], 00:26:44.549 | 99.99th=[17113] 00:26:44.549 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60007msec); 0 zone resets 00:26:44.549 slat (usec): min=9, max=27756, avg=77.87, stdev=1225.68 00:26:44.549 clat (usec): min=220, max=409, avg=297.91, stdev=39.73 00:26:44.549 lat (usec): min=231, max=28028, avg=375.78, stdev=1225.34 00:26:44.549 clat percentiles (usec): 00:26:44.549 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 260], 00:26:44.549 | 30.00th=[ 281], 
40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:26:44.549 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 359], 95.00th=[ 371], 00:26:44.549 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 412], 99.95th=[ 412], 00:26:44.549 | 99.99th=[ 412] 00:26:44.549 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:26:44.549 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:44.549 lat (usec) : 250=7.05%, 500=46.11% 00:26:44.549 lat (msec) : 50=46.74%, >=2000=0.10% 00:26:44.549 cpu : usr=0.03%, sys=0.05%, ctx=969, majf=0, minf=2 00:26:44.549 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.549 issued rwts: total=453,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.549 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:44.549 00:26:44.549 Run status group 0 (all jobs): 00:26:44.549 READ: bw=30.2KiB/s (30.9kB/s), 30.2KiB/s-30.2KiB/s (30.9kB/s-30.9kB/s), io=1812KiB (1855kB), run=60007-60007msec 00:26:44.549 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60007-60007msec 00:26:44.549 00:26:44.549 Disk stats (read/write): 00:26:44.549 nvme0n1: ios=502/512, merge=0/0, ticks=20062/139, in_queue=20201, util=99.87% 00:26:44.549 05:46:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:44.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:44.549 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:44.549 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.549 05:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.549 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:44.549 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:44.550 nvmf hotplug test: fio successful as expected 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:44.550 05:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:44.550 rmmod nvme_tcp 00:26:44.550 rmmod nvme_fabrics 00:26:44.550 rmmod nvme_keyring 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1689404 ']' 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1689404 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1689404 ']' 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1689404 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1689404 00:26:44.550 05:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1689404' 00:26:44.550 killing process with pid 1689404 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1689404 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1689404 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.550 05:46:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.809 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:44.809 00:26:44.809 real 1m8.095s 00:26:44.809 user 4m10.981s 00:26:44.809 sys 0m6.253s 00:26:44.809 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:26:44.809 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.809 ************************************ 00:26:44.809 END TEST nvmf_initiator_timeout 00:26:44.809 ************************************ 00:26:45.067 05:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:26:45.068 05:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:26:45.068 05:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:26:45.068 05:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.068 05:46:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@297 -- # local -ga x722 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.968 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:46.969 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:46.969 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:46.969 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:46.969 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:46.969 ************************************ 00:26:46.969 START TEST nvmf_perf_adq 00:26:46.969 ************************************ 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:46.969 * Looking for test storage... 
00:26:46.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.969 05:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:46.969 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:46.970 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:46.970 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:48.871 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:48.871 05:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:48.871 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:48.871 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:48.872 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:48.872 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:48.872 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:49.438 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:51.339 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:56.636 
05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.636 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:56.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:56.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.637 05:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:56.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.637 05:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:56.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.637 
05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:56.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:26:56.637 00:26:56.637 --- 10.0.0.2 ping statistics --- 00:26:56.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.637 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:56.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:26:56.637 00:26:56.637 --- 10.0.0.1 ping statistics --- 00:26:56.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.637 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1701304 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1701304 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1701304 ']' 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.637 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.637 [2024-07-25 05:46:50.038301] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:26:56.637 [2024-07-25 05:46:50.038403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.637 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.638 [2024-07-25 05:46:50.105086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:56.638 [2024-07-25 05:46:50.194681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.638 [2024-07-25 05:46:50.194736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.638 [2024-07-25 05:46:50.194759] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.638 [2024-07-25 05:46:50.194770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.638 [2024-07-25 05:46:50.194779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:56.638 [2024-07-25 05:46:50.194878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.638 [2024-07-25 05:46:50.194901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.638 [2024-07-25 05:46:50.194983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:56.638 [2024-07-25 05:46:50.194985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:56.638 05:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.638 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.895 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.895 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:56.895 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.895 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.895 [2024-07-25 05:46:50.424186] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.896 Malloc1 00:26:56.896 05:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.896 [2024-07-25 05:46:50.474883] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1701450 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:56.896 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:56.896 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.792 05:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:58.792 05:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.792 05:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.050 05:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.050 05:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:59.050 "tick_rate": 2700000000, 00:26:59.050 "poll_groups": [ 00:26:59.050 { 00:26:59.050 "name": "nvmf_tgt_poll_group_000", 00:26:59.050 "admin_qpairs": 1, 00:26:59.050 "io_qpairs": 1, 00:26:59.050 "current_admin_qpairs": 1, 00:26:59.050 "current_io_qpairs": 1, 00:26:59.050 "pending_bdev_io": 0, 00:26:59.050 "completed_nvme_io": 21266, 00:26:59.050 "transports": [ 00:26:59.050 { 00:26:59.050 "trtype": "TCP" 00:26:59.050 } 00:26:59.050 ] 00:26:59.050 }, 00:26:59.050 { 00:26:59.050 "name": "nvmf_tgt_poll_group_001", 00:26:59.050 "admin_qpairs": 0, 00:26:59.050 "io_qpairs": 1, 00:26:59.050 "current_admin_qpairs": 0, 00:26:59.050 "current_io_qpairs": 1, 00:26:59.050 "pending_bdev_io": 0, 00:26:59.050 "completed_nvme_io": 21620, 00:26:59.050 "transports": [ 00:26:59.050 { 00:26:59.051 "trtype": "TCP" 00:26:59.051 } 00:26:59.051 ] 00:26:59.051 }, 00:26:59.051 { 00:26:59.051 "name": "nvmf_tgt_poll_group_002", 00:26:59.051 "admin_qpairs": 0, 00:26:59.051 "io_qpairs": 1, 00:26:59.051 "current_admin_qpairs": 0, 00:26:59.051 "current_io_qpairs": 1, 
00:26:59.051 "pending_bdev_io": 0, 00:26:59.051 "completed_nvme_io": 18674, 00:26:59.051 "transports": [ 00:26:59.051 { 00:26:59.051 "trtype": "TCP" 00:26:59.051 } 00:26:59.051 ] 00:26:59.051 }, 00:26:59.051 { 00:26:59.051 "name": "nvmf_tgt_poll_group_003", 00:26:59.051 "admin_qpairs": 0, 00:26:59.051 "io_qpairs": 1, 00:26:59.051 "current_admin_qpairs": 0, 00:26:59.051 "current_io_qpairs": 1, 00:26:59.051 "pending_bdev_io": 0, 00:26:59.051 "completed_nvme_io": 19329, 00:26:59.051 "transports": [ 00:26:59.051 { 00:26:59.051 "trtype": "TCP" 00:26:59.051 } 00:26:59.051 ] 00:26:59.051 } 00:26:59.051 ] 00:26:59.051 }' 00:26:59.051 05:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:59.051 05:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:59.051 05:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:59.051 05:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:59.051 05:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1701450 00:27:07.165 Initializing NVMe Controllers 00:27:07.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:07.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:07.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:07.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:07.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:07.165 Initialization complete. Launching workers. 
00:27:07.165 ======================================================== 00:27:07.165 Latency(us) 00:27:07.165 Device Information : IOPS MiB/s Average min max 00:27:07.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10006.61 39.09 6396.54 2856.44 11053.88 00:27:07.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11107.30 43.39 5761.67 2745.14 7184.61 00:27:07.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9623.11 37.59 6650.50 3021.81 9757.20 00:27:07.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10950.30 42.77 5846.21 3909.56 7464.36 00:27:07.166 ======================================================== 00:27:07.166 Total : 41687.32 162.84 6141.45 2745.14 11053.88 00:27:07.166 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:07.166 rmmod nvme_tcp 00:27:07.166 rmmod nvme_fabrics 00:27:07.166 rmmod nvme_keyring 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:07.166 05:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1701304 ']' 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1701304 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1701304 ']' 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1701304 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1701304 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1701304' 00:27:07.166 killing process with pid 1701304 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1701304 00:27:07.166 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1701304 00:27:07.425 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:07.425 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:07.425 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:07.425 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.425 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:27:07.425 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.425 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.425 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.345 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:09.345 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:09.345 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:10.277 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:12.174 05:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.441 05:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:17.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:17.441 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.441 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:17.441 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:17.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:17.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:27:17.442 00:27:17.442 --- 10.0.0.2 ping statistics --- 00:27:17.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.442 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:27:17.442 00:27:17.442 --- 10.0.0.1 ping statistics --- 00:27:17.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.442 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:17.442 net.core.busy_poll = 1 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:17.442 net.core.busy_read = 1 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
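As an annotation: the ADQ driver setup that `target/perf_adq.sh` just applied above can be summarized as the following standalone sketch. The interface, namespace, IP, and port values are copied from this run; `run` only echoes each command so the sketch can be dry-run without an E810 NIC or root (replace it with `ip netns exec "$NETNS"` to actually apply the settings):

```shell
#!/bin/sh
# Dry-run sketch of the ADQ setup steps traced above (values taken from this log).
IFACE=cvl_0_0
NETNS=cvl_0_0_ns_spdk
PORT=4420

run() { echo "+ $*"; }   # stand-in for: ip netns exec "$NETNS" "$@"

run ethtool --offload "$IFACE" hw-tc-offload on
run ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
run sysctl -w net.core.busy_poll=1
run sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 (queues 0-1) for default traffic, TC1 (queues 2-3) for NVMe/TCP.
run tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
run tc qdisc add dev "$IFACE" ingress
# Steer traffic for the target's listen port into TC1 in hardware (skip_sw).
run tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1
```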
00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1704058 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1704058 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1704058 ']' 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:17.442 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.442 [2024-07-25 05:47:10.971997] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
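The `waitforlisten` step above blocks until the freshly launched `nvmf_tgt` exposes its RPC socket at /var/tmp/spdk.sock. A minimal sketch of that kind of retry loop follows; the function name and retry parameters are illustrative, not SPDK's actual implementation:

```shell
# Poll until a socket (or any file) appears at $1, up to $2 attempts
# (default 100), sleeping briefly between checks. Returns 0 on success.
wait_for_sock() {
    path=$1 retries=${2:-100}
    i=0
    while [ "$i" -lt "$retries" ]; do
        [ -e "$path" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```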
00:27:17.442 [2024-07-25 05:47:10.972084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.442 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.442 [2024-07-25 05:47:11.039408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.442 [2024-07-25 05:47:11.132467] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.442 [2024-07-25 05:47:11.132537] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.442 [2024-07-25 05:47:11.132552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.442 [2024-07-25 05:47:11.132564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.442 [2024-07-25 05:47:11.132573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:17.442 [2024-07-25 05:47:11.135289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.442 [2024-07-25 05:47:11.135314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.442 [2024-07-25 05:47:11.135391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.442 [2024-07-25 05:47:11.135394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:17.701 05:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.701 [2024-07-25 05:47:11.371431] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.701 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.701 Malloc1 00:27:17.972 05:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.972 [2024-07-25 05:47:11.422630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1704091 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:17.972 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:17.972 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.894 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:19.894 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.894 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.894 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.894 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:19.894 "tick_rate": 2700000000, 00:27:19.894 "poll_groups": [ 00:27:19.894 { 00:27:19.894 "name": "nvmf_tgt_poll_group_000", 00:27:19.894 "admin_qpairs": 1, 00:27:19.894 "io_qpairs": 1, 00:27:19.894 "current_admin_qpairs": 1, 00:27:19.894 "current_io_qpairs": 1, 00:27:19.894 "pending_bdev_io": 0, 00:27:19.894 "completed_nvme_io": 24898, 00:27:19.894 "transports": [ 00:27:19.894 { 00:27:19.894 "trtype": "TCP" 00:27:19.894 } 00:27:19.894 ] 00:27:19.894 }, 00:27:19.894 { 00:27:19.894 "name": "nvmf_tgt_poll_group_001", 00:27:19.894 "admin_qpairs": 0, 00:27:19.894 "io_qpairs": 3, 00:27:19.894 "current_admin_qpairs": 0, 00:27:19.894 "current_io_qpairs": 3, 00:27:19.894 "pending_bdev_io": 0, 00:27:19.894 "completed_nvme_io": 27384, 00:27:19.894 "transports": [ 00:27:19.894 { 00:27:19.894 "trtype": "TCP" 00:27:19.894 } 00:27:19.894 ] 00:27:19.894 }, 00:27:19.894 { 00:27:19.894 "name": "nvmf_tgt_poll_group_002", 00:27:19.894 "admin_qpairs": 0, 00:27:19.894 "io_qpairs": 0, 00:27:19.894 "current_admin_qpairs": 0, 00:27:19.894 "current_io_qpairs": 0, 
00:27:19.894 "pending_bdev_io": 0, 00:27:19.894 "completed_nvme_io": 0, 00:27:19.894 "transports": [ 00:27:19.894 { 00:27:19.894 "trtype": "TCP" 00:27:19.894 } 00:27:19.894 ] 00:27:19.894 }, 00:27:19.894 { 00:27:19.894 "name": "nvmf_tgt_poll_group_003", 00:27:19.894 "admin_qpairs": 0, 00:27:19.894 "io_qpairs": 0, 00:27:19.894 "current_admin_qpairs": 0, 00:27:19.894 "current_io_qpairs": 0, 00:27:19.894 "pending_bdev_io": 0, 00:27:19.894 "completed_nvme_io": 0, 00:27:19.894 "transports": [ 00:27:19.894 { 00:27:19.894 "trtype": "TCP" 00:27:19.894 } 00:27:19.894 ] 00:27:19.894 } 00:27:19.894 ] 00:27:19.894 }' 00:27:19.894 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:19.894 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:19.894 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:19.894 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:19.894 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1704091 00:27:27.997 Initializing NVMe Controllers 00:27:27.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:27.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:27.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:27.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:27.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:27.997 Initialization complete. Launching workers. 
00:27:27.997 ======================================================== 00:27:27.997 Latency(us) 00:27:27.997 Device Information : IOPS MiB/s Average min max 00:27:27.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13111.16 51.22 4881.08 1421.13 6909.52 00:27:27.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4838.88 18.90 13269.55 2245.35 61556.16 00:27:27.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4797.48 18.74 13369.16 2129.51 61220.54 00:27:27.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4745.98 18.54 13530.36 2454.55 61263.52 00:27:27.997 ======================================================== 00:27:27.997 Total : 27493.51 107.40 9331.65 1421.13 61556.16 00:27:27.997 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.997 rmmod nvme_tcp 00:27:27.997 rmmod nvme_fabrics 00:27:27.997 rmmod nvme_keyring 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:27.997 05:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1704058 ']' 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1704058 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1704058 ']' 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1704058 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1704058 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1704058' 00:27:27.997 killing process with pid 1704058 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1704058 00:27:27.997 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1704058 00:27:28.562 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:28.562 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:28.562 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:28.562 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:28.562 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:27:28.562 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.562 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.562 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.842 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:31.843 00:27:31.843 real 0m44.607s 00:27:31.843 user 2m30.635s 00:27:31.843 sys 0m12.788s 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.843 ************************************ 00:27:31.843 END TEST nvmf_perf_adq 00:27:31.843 ************************************ 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:31.843 ************************************ 00:27:31.843 START TEST nvmf_shutdown 00:27:31.843 ************************************ 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:31.843 * Looking for test storage... 
00:27:31.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.843 05:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:31.843 05:47:25 
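The `nvmf/common.sh` sourcing traced above establishes the test environment defaults used throughout this run. A minimal condensation of those assignments, with the values visible in the trace (the host NQN is really derived from `nvme gen-hostnqn` on the rig; a fixed placeholder stands in here):

```shell
# Condensed sketch of the defaults test/nvmf/common.sh sets (values taken
# from the trace above); NVME_HOSTNQN placeholder is an assumption.
NVMF_PORT=4420                       # primary NVMe/TCP listener port
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000"
NVME_HOST=(--hostnqn="$NVME_HOSTNQN")
echo "$NVMF_PORT $NVMF_SERIAL"
```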
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:31.843 ************************************ 00:27:31.843 START TEST nvmf_shutdown_tc1 00:27:31.843 ************************************ 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.843 05:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.843 05:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:33.745 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.745 05:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:33.745 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:33.745 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:33.745 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:33.745 05:47:27 
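The discovery loop above maps each supported PCI NIC (here two Intel 0x159b/E810 ports bound to `ice`) to its kernel net device by globbing the device's `net/` directory in sysfs, then stripping the path. The same two steps can be exercised against a throwaway fake sysfs tree (the `mktemp` tree is purely illustrative):

```shell
# Sketch of the pci -> net-device mapping seen in the trace:
#   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  then strip the path.
# A fake sysfs tree is used so the logic runs anywhere without hardware.
fake=$(mktemp -d)
pci="0000:0a:00.0"
mkdir -p "$fake/$pci/net/cvl_0_0"          # stand-in for the real sysfs entry
pci_net_devs=("$fake/$pci/net/"*)          # glob the net/ subdirectory
pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```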
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.745 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:33.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:27:33.746 00:27:33.746 --- 10.0.0.2 ping statistics --- 00:27:33.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.746 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:27:33.746 00:27:33.746 --- 10.0.0.1 ping statistics --- 00:27:33.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.746 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:33.746 
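The `nvmf_tcp_init` sequence traced above builds a two-endpoint topology on a single host: the target interface `cvl_0_0` is moved into a private namespace and given 10.0.0.2, the initiator keeps `cvl_0_1` with 10.0.0.1, and an iptables rule opens TCP 4420; both pings confirm the wiring. A dry-run sketch that collects (rather than executes, since they need root) the commands in the order the trace runs them:

```shell
# Dry-run sketch of nvmf_tcp_init from the trace: commands are gathered
# into an array and printed instead of executed (root required to run).
ns=cvl_0_0_ns_spdk
target_if=cvl_0_0
initiator_if=cvl_0_1
cmds=(
  "ip netns add $ns"
  "ip link set $target_if netns $ns"
  "ip addr add 10.0.0.1/24 dev $initiator_if"
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
  "ip link set $initiator_if up"
  "ip netns exec $ns ip link set $target_if up"
  "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${cmds[@]}"
```

After this, `ping -c 1 10.0.0.2` from the default namespace and `ip netns exec $ns ping -c 1 10.0.0.1` verify reachability in both directions, exactly as the trace shows.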
05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1707377 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1707377 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1707377 ']' 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.746 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:33.746 [2024-07-25 05:47:27.268402] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:27:33.746 [2024-07-25 05:47:27.268480] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.746 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.746 [2024-07-25 05:47:27.336025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.746 [2024-07-25 05:47:27.427235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.746 [2024-07-25 05:47:27.427313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.746 [2024-07-25 05:47:27.427327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.746 [2024-07-25 05:47:27.427337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.746 [2024-07-25 05:47:27.427362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:33.746 [2024-07-25 05:47:27.427418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.746 [2024-07-25 05:47:27.427479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.746 [2024-07-25 05:47:27.427547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:33.746 [2024-07-25 05:47:27.427550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.004 [2024-07-25 05:47:27.587747] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.004 05:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.004 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.005 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.005 Malloc1 00:27:34.005 [2024-07-25 05:47:27.677614] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.005 Malloc2 00:27:34.262 Malloc3 00:27:34.262 Malloc4 00:27:34.262 Malloc5 00:27:34.262 Malloc6 00:27:34.262 Malloc7 00:27:34.521 Malloc8 00:27:34.521 Malloc9 
00:27:34.521 Malloc10 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1707554 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1707554 /var/tmp/bdevperf.sock 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1707554 ']' 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
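The `shutdown.sh` loop above (`for i in "${num_subsystems[@]}"` plus `cat`) appends one RPC batch per subsystem 1..10 into `rpcs.txt`, which is why Malloc1 through Malloc10 appear as created bdevs. The exact batch is not visible in the log; the sketch below is an assumption about one iteration, using the `MALLOC_BDEV_SIZE=64` and `MALLOC_BLOCK_SIZE=512` values set earlier and standard SPDK `rpc.py` method names:

```shell
# Hypothetical single iteration of the rpcs.txt batch (RPC names are
# real SPDK rpc.py methods; the precise flags here are assumptions).
i=1
rpc_batch=$(cat <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
)
echo "$rpc_batch"
```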
/var/tmp/bdevperf.sock...' 00:27:34.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.521 { 00:27:34.521 "params": { 00:27:34.521 "name": "Nvme$subsystem", 00:27:34.521 "trtype": "$TEST_TRANSPORT", 00:27:34.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.521 "adrfam": "ipv4", 00:27:34.521 "trsvcid": "$NVMF_PORT", 00:27:34.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.521 "hdgst": ${hdgst:-false}, 00:27:34.521 "ddgst": ${ddgst:-false} 00:27:34.521 }, 00:27:34.521 "method": "bdev_nvme_attach_controller" 00:27:34.521 } 00:27:34.521 EOF 00:27:34.521 )") 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.521 { 00:27:34.521 "params": { 00:27:34.521 "name": "Nvme$subsystem", 00:27:34.521 "trtype": "$TEST_TRANSPORT", 00:27:34.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.521 "adrfam": "ipv4", 00:27:34.521 "trsvcid": "$NVMF_PORT", 00:27:34.521 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.521 "hdgst": ${hdgst:-false}, 00:27:34.521 "ddgst": ${ddgst:-false} 00:27:34.521 }, 00:27:34.521 "method": "bdev_nvme_attach_controller" 00:27:34.521 } 00:27:34.521 EOF 00:27:34.521 )") 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.521 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.522 { 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme$subsystem", 00:27:34.522 "trtype": "$TEST_TRANSPORT", 00:27:34.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "$NVMF_PORT", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.522 "hdgst": ${hdgst:-false}, 00:27:34.522 "ddgst": ${ddgst:-false} 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 } 00:27:34.522 EOF 00:27:34.522 )") 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.522 { 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme$subsystem", 00:27:34.522 "trtype": "$TEST_TRANSPORT", 00:27:34.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "$NVMF_PORT", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.522 "hdgst": 
${hdgst:-false}, 00:27:34.522 "ddgst": ${ddgst:-false} 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 } 00:27:34.522 EOF 00:27:34.522 )") 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.522 { 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme$subsystem", 00:27:34.522 "trtype": "$TEST_TRANSPORT", 00:27:34.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "$NVMF_PORT", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.522 "hdgst": ${hdgst:-false}, 00:27:34.522 "ddgst": ${ddgst:-false} 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 } 00:27:34.522 EOF 00:27:34.522 )") 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.522 { 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme$subsystem", 00:27:34.522 "trtype": "$TEST_TRANSPORT", 00:27:34.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "$NVMF_PORT", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.522 "hdgst": ${hdgst:-false}, 00:27:34.522 "ddgst": ${ddgst:-false} 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 
00:27:34.522 } 00:27:34.522 EOF 00:27:34.522 )") 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.522 { 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme$subsystem", 00:27:34.522 "trtype": "$TEST_TRANSPORT", 00:27:34.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "$NVMF_PORT", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.522 "hdgst": ${hdgst:-false}, 00:27:34.522 "ddgst": ${ddgst:-false} 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 } 00:27:34.522 EOF 00:27:34.522 )") 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.522 { 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme$subsystem", 00:27:34.522 "trtype": "$TEST_TRANSPORT", 00:27:34.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "$NVMF_PORT", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.522 "hdgst": ${hdgst:-false}, 00:27:34.522 "ddgst": ${ddgst:-false} 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 } 00:27:34.522 EOF 00:27:34.522 )") 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.522 { 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme$subsystem", 00:27:34.522 "trtype": "$TEST_TRANSPORT", 00:27:34.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "$NVMF_PORT", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.522 "hdgst": ${hdgst:-false}, 00:27:34.522 "ddgst": ${ddgst:-false} 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 } 00:27:34.522 EOF 00:27:34.522 )") 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.522 { 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme$subsystem", 00:27:34.522 "trtype": "$TEST_TRANSPORT", 00:27:34.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "$NVMF_PORT", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.522 "hdgst": ${hdgst:-false}, 00:27:34.522 "ddgst": ${ddgst:-false} 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 } 00:27:34.522 EOF 00:27:34.522 )") 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@556 -- # jq . 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:34.522 05:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme1", 00:27:34.522 "trtype": "tcp", 00:27:34.522 "traddr": "10.0.0.2", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "4420", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:34.522 "hdgst": false, 00:27:34.522 "ddgst": false 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 },{ 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme2", 00:27:34.522 "trtype": "tcp", 00:27:34.522 "traddr": "10.0.0.2", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "4420", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:34.522 "hdgst": false, 00:27:34.522 "ddgst": false 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 },{ 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme3", 00:27:34.522 "trtype": "tcp", 00:27:34.522 "traddr": "10.0.0.2", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "4420", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:34.522 "hdgst": false, 00:27:34.522 "ddgst": false 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 },{ 00:27:34.522 "params": { 00:27:34.522 "name": "Nvme4", 00:27:34.522 "trtype": "tcp", 00:27:34.522 "traddr": "10.0.0.2", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "4420", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:34.522 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:34.522 "hdgst": false, 00:27:34.522 "ddgst": false 00:27:34.522 }, 00:27:34.522 "method": "bdev_nvme_attach_controller" 00:27:34.522 },{ 
00:27:34.522 "params": { 00:27:34.522 "name": "Nvme5", 00:27:34.522 "trtype": "tcp", 00:27:34.522 "traddr": "10.0.0.2", 00:27:34.522 "adrfam": "ipv4", 00:27:34.522 "trsvcid": "4420", 00:27:34.522 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:34.523 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:34.523 "hdgst": false, 00:27:34.523 "ddgst": false 00:27:34.523 }, 00:27:34.523 "method": "bdev_nvme_attach_controller" 00:27:34.523 },{ 00:27:34.523 "params": { 00:27:34.523 "name": "Nvme6", 00:27:34.523 "trtype": "tcp", 00:27:34.523 "traddr": "10.0.0.2", 00:27:34.523 "adrfam": "ipv4", 00:27:34.523 "trsvcid": "4420", 00:27:34.523 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:34.523 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:34.523 "hdgst": false, 00:27:34.523 "ddgst": false 00:27:34.523 }, 00:27:34.523 "method": "bdev_nvme_attach_controller" 00:27:34.523 },{ 00:27:34.523 "params": { 00:27:34.523 "name": "Nvme7", 00:27:34.523 "trtype": "tcp", 00:27:34.523 "traddr": "10.0.0.2", 00:27:34.523 "adrfam": "ipv4", 00:27:34.523 "trsvcid": "4420", 00:27:34.523 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:34.523 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:34.523 "hdgst": false, 00:27:34.523 "ddgst": false 00:27:34.523 }, 00:27:34.523 "method": "bdev_nvme_attach_controller" 00:27:34.523 },{ 00:27:34.523 "params": { 00:27:34.523 "name": "Nvme8", 00:27:34.523 "trtype": "tcp", 00:27:34.523 "traddr": "10.0.0.2", 00:27:34.523 "adrfam": "ipv4", 00:27:34.523 "trsvcid": "4420", 00:27:34.523 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:34.523 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:34.523 "hdgst": false, 00:27:34.523 "ddgst": false 00:27:34.523 }, 00:27:34.523 "method": "bdev_nvme_attach_controller" 00:27:34.523 },{ 00:27:34.523 "params": { 00:27:34.523 "name": "Nvme9", 00:27:34.523 "trtype": "tcp", 00:27:34.523 "traddr": "10.0.0.2", 00:27:34.523 "adrfam": "ipv4", 00:27:34.523 "trsvcid": "4420", 00:27:34.523 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:34.523 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:34.523 "hdgst": false, 00:27:34.523 "ddgst": false 00:27:34.523 }, 00:27:34.523 "method": "bdev_nvme_attach_controller" 00:27:34.523 },{ 00:27:34.523 "params": { 00:27:34.523 "name": "Nvme10", 00:27:34.523 "trtype": "tcp", 00:27:34.523 "traddr": "10.0.0.2", 00:27:34.523 "adrfam": "ipv4", 00:27:34.523 "trsvcid": "4420", 00:27:34.523 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:34.523 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:34.523 "hdgst": false, 00:27:34.523 "ddgst": false 00:27:34.523 }, 00:27:34.523 "method": "bdev_nvme_attach_controller" 00:27:34.523 }' 00:27:34.523 [2024-07-25 05:47:28.185865] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:27:34.523 [2024-07-25 05:47:28.185947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:34.523 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.781 [2024-07-25 05:47:28.252188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.781 [2024-07-25 05:47:28.339434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.677 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:36.677 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:36.677 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:36.677 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.677 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:36.677 05:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.677 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1707554 00:27:36.677 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:36.677 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:37.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1707554 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1707377 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.612 { 00:27:37.612 "params": { 00:27:37.612 "name": "Nvme$subsystem", 00:27:37.612 "trtype": "$TEST_TRANSPORT", 00:27:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.612 "adrfam": "ipv4", 00:27:37.612 "trsvcid": 
"$NVMF_PORT", 00:27:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.612 "hdgst": ${hdgst:-false}, 00:27:37.612 "ddgst": ${ddgst:-false} 00:27:37.612 }, 00:27:37.612 "method": "bdev_nvme_attach_controller" 00:27:37.612 } 00:27:37.612 EOF 00:27:37.612 )") 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.612 { 00:27:37.612 "params": { 00:27:37.612 "name": "Nvme$subsystem", 00:27:37.612 "trtype": "$TEST_TRANSPORT", 00:27:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.612 "adrfam": "ipv4", 00:27:37.612 "trsvcid": "$NVMF_PORT", 00:27:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.612 "hdgst": ${hdgst:-false}, 00:27:37.612 "ddgst": ${ddgst:-false} 00:27:37.612 }, 00:27:37.612 "method": "bdev_nvme_attach_controller" 00:27:37.612 } 00:27:37.612 EOF 00:27:37.612 )") 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.612 { 00:27:37.612 "params": { 00:27:37.612 "name": "Nvme$subsystem", 00:27:37.612 "trtype": "$TEST_TRANSPORT", 00:27:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.612 "adrfam": "ipv4", 00:27:37.612 "trsvcid": "$NVMF_PORT", 00:27:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.612 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:37.612 "hdgst": ${hdgst:-false}, 00:27:37.612 "ddgst": ${ddgst:-false} 00:27:37.612 }, 00:27:37.612 "method": "bdev_nvme_attach_controller" 00:27:37.612 } 00:27:37.612 EOF 00:27:37.612 )") 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.612 { 00:27:37.612 "params": { 00:27:37.612 "name": "Nvme$subsystem", 00:27:37.612 "trtype": "$TEST_TRANSPORT", 00:27:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.612 "adrfam": "ipv4", 00:27:37.612 "trsvcid": "$NVMF_PORT", 00:27:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.612 "hdgst": ${hdgst:-false}, 00:27:37.612 "ddgst": ${ddgst:-false} 00:27:37.612 }, 00:27:37.612 "method": "bdev_nvme_attach_controller" 00:27:37.612 } 00:27:37.612 EOF 00:27:37.612 )") 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.612 { 00:27:37.612 "params": { 00:27:37.612 "name": "Nvme$subsystem", 00:27:37.612 "trtype": "$TEST_TRANSPORT", 00:27:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.612 "adrfam": "ipv4", 00:27:37.612 "trsvcid": "$NVMF_PORT", 00:27:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.612 "hdgst": ${hdgst:-false}, 00:27:37.612 "ddgst": ${ddgst:-false} 00:27:37.612 
}, 00:27:37.612 "method": "bdev_nvme_attach_controller" 00:27:37.612 } 00:27:37.612 EOF 00:27:37.612 )") 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.612 { 00:27:37.612 "params": { 00:27:37.612 "name": "Nvme$subsystem", 00:27:37.612 "trtype": "$TEST_TRANSPORT", 00:27:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.612 "adrfam": "ipv4", 00:27:37.612 "trsvcid": "$NVMF_PORT", 00:27:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.612 "hdgst": ${hdgst:-false}, 00:27:37.612 "ddgst": ${ddgst:-false} 00:27:37.612 }, 00:27:37.612 "method": "bdev_nvme_attach_controller" 00:27:37.612 } 00:27:37.612 EOF 00:27:37.612 )") 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.612 { 00:27:37.612 "params": { 00:27:37.612 "name": "Nvme$subsystem", 00:27:37.612 "trtype": "$TEST_TRANSPORT", 00:27:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.612 "adrfam": "ipv4", 00:27:37.612 "trsvcid": "$NVMF_PORT", 00:27:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.612 "hdgst": ${hdgst:-false}, 00:27:37.612 "ddgst": ${ddgst:-false} 00:27:37.612 }, 00:27:37.612 "method": "bdev_nvme_attach_controller" 00:27:37.612 } 00:27:37.612 EOF 00:27:37.612 )") 00:27:37.612 05:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.612 { 00:27:37.612 "params": { 00:27:37.612 "name": "Nvme$subsystem", 00:27:37.612 "trtype": "$TEST_TRANSPORT", 00:27:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.612 "adrfam": "ipv4", 00:27:37.612 "trsvcid": "$NVMF_PORT", 00:27:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.612 "hdgst": ${hdgst:-false}, 00:27:37.612 "ddgst": ${ddgst:-false} 00:27:37.612 }, 00:27:37.612 "method": "bdev_nvme_attach_controller" 00:27:37.612 } 00:27:37.612 EOF 00:27:37.612 )") 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.612 { 00:27:37.612 "params": { 00:27:37.612 "name": "Nvme$subsystem", 00:27:37.612 "trtype": "$TEST_TRANSPORT", 00:27:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.612 "adrfam": "ipv4", 00:27:37.612 "trsvcid": "$NVMF_PORT", 00:27:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.612 "hdgst": ${hdgst:-false}, 00:27:37.612 "ddgst": ${ddgst:-false} 00:27:37.612 }, 00:27:37.612 "method": "bdev_nvme_attach_controller" 00:27:37.612 } 00:27:37.612 EOF 00:27:37.612 )") 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.612 05:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.612 { 00:27:37.612 "params": { 00:27:37.612 "name": "Nvme$subsystem", 00:27:37.612 "trtype": "$TEST_TRANSPORT", 00:27:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.612 "adrfam": "ipv4", 00:27:37.612 "trsvcid": "$NVMF_PORT", 00:27:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.612 "hdgst": ${hdgst:-false}, 00:27:37.612 "ddgst": ${ddgst:-false} 00:27:37.612 }, 00:27:37.612 "method": "bdev_nvme_attach_controller" 00:27:37.612 } 00:27:37.612 EOF 00:27:37.612 )") 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.612 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:37.613 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:37.613 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:37.613 "params": { 00:27:37.613 "name": "Nvme1", 00:27:37.613 "trtype": "tcp", 00:27:37.613 "traddr": "10.0.0.2", 00:27:37.613 "adrfam": "ipv4", 00:27:37.613 "trsvcid": "4420", 00:27:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.613 "hdgst": false, 00:27:37.613 "ddgst": false 00:27:37.613 }, 00:27:37.613 "method": "bdev_nvme_attach_controller" 00:27:37.613 },{ 00:27:37.613 "params": { 00:27:37.613 "name": "Nvme2", 00:27:37.613 "trtype": "tcp", 00:27:37.613 "traddr": "10.0.0.2", 00:27:37.613 "adrfam": "ipv4", 00:27:37.613 "trsvcid": "4420", 00:27:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:37.613 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:37.613 "hdgst": false, 00:27:37.613 "ddgst": false 00:27:37.613 }, 00:27:37.613 "method": "bdev_nvme_attach_controller" 00:27:37.613 },{ 00:27:37.613 "params": { 00:27:37.613 "name": "Nvme3", 00:27:37.613 "trtype": "tcp", 00:27:37.613 "traddr": "10.0.0.2", 00:27:37.613 "adrfam": "ipv4", 00:27:37.613 "trsvcid": "4420", 00:27:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:37.613 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:37.613 "hdgst": false, 00:27:37.613 "ddgst": false 00:27:37.613 }, 00:27:37.613 "method": "bdev_nvme_attach_controller" 00:27:37.613 },{ 00:27:37.613 "params": { 00:27:37.613 "name": "Nvme4", 00:27:37.613 "trtype": "tcp", 00:27:37.613 "traddr": "10.0.0.2", 00:27:37.613 "adrfam": "ipv4", 00:27:37.613 "trsvcid": "4420", 00:27:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:37.613 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:37.613 "hdgst": false, 00:27:37.613 "ddgst": false 00:27:37.613 }, 00:27:37.613 "method": "bdev_nvme_attach_controller" 00:27:37.613 },{ 00:27:37.613 "params": { 
00:27:37.613 "name": "Nvme5", 00:27:37.613 "trtype": "tcp", 00:27:37.613 "traddr": "10.0.0.2", 00:27:37.613 "adrfam": "ipv4", 00:27:37.613 "trsvcid": "4420", 00:27:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:37.613 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:37.613 "hdgst": false, 00:27:37.613 "ddgst": false 00:27:37.613 }, 00:27:37.613 "method": "bdev_nvme_attach_controller" 00:27:37.613 },{ 00:27:37.613 "params": { 00:27:37.613 "name": "Nvme6", 00:27:37.613 "trtype": "tcp", 00:27:37.613 "traddr": "10.0.0.2", 00:27:37.613 "adrfam": "ipv4", 00:27:37.613 "trsvcid": "4420", 00:27:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:37.613 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:37.613 "hdgst": false, 00:27:37.613 "ddgst": false 00:27:37.613 }, 00:27:37.613 "method": "bdev_nvme_attach_controller" 00:27:37.613 },{ 00:27:37.613 "params": { 00:27:37.613 "name": "Nvme7", 00:27:37.613 "trtype": "tcp", 00:27:37.613 "traddr": "10.0.0.2", 00:27:37.613 "adrfam": "ipv4", 00:27:37.613 "trsvcid": "4420", 00:27:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:37.613 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:37.613 "hdgst": false, 00:27:37.613 "ddgst": false 00:27:37.613 }, 00:27:37.613 "method": "bdev_nvme_attach_controller" 00:27:37.613 },{ 00:27:37.613 "params": { 00:27:37.613 "name": "Nvme8", 00:27:37.613 "trtype": "tcp", 00:27:37.613 "traddr": "10.0.0.2", 00:27:37.613 "adrfam": "ipv4", 00:27:37.613 "trsvcid": "4420", 00:27:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:37.613 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:37.613 "hdgst": false, 00:27:37.613 "ddgst": false 00:27:37.613 }, 00:27:37.613 "method": "bdev_nvme_attach_controller" 00:27:37.613 },{ 00:27:37.613 "params": { 00:27:37.613 "name": "Nvme9", 00:27:37.613 "trtype": "tcp", 00:27:37.613 "traddr": "10.0.0.2", 00:27:37.613 "adrfam": "ipv4", 00:27:37.613 "trsvcid": "4420", 00:27:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:37.613 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:27:37.613 "hdgst": false, 00:27:37.613 "ddgst": false 00:27:37.613 }, 00:27:37.613 "method": "bdev_nvme_attach_controller" 00:27:37.613 },{ 00:27:37.613 "params": { 00:27:37.613 "name": "Nvme10", 00:27:37.613 "trtype": "tcp", 00:27:37.613 "traddr": "10.0.0.2", 00:27:37.613 "adrfam": "ipv4", 00:27:37.613 "trsvcid": "4420", 00:27:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:37.613 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:37.613 "hdgst": false, 00:27:37.613 "ddgst": false 00:27:37.613 }, 00:27:37.613 "method": "bdev_nvme_attach_controller" 00:27:37.613 }' 00:27:37.613 [2024-07-25 05:47:31.215376] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:27:37.613 [2024-07-25 05:47:31.215467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707855 ] 00:27:37.613 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.613 [2024-07-25 05:47:31.280504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.871 [2024-07-25 05:47:31.370695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.245 Running I/O for 1 seconds... 
00:27:40.650
00:27:40.650 Latency(us)
00:27:40.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:40.650 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:40.650 Verification LBA range: start 0x0 length 0x400
00:27:40.650 Nvme1n1 : 1.07 179.60 11.22 0.00 0.00 352794.04 37671.06 282727.16
00:27:40.650 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:40.650 Verification LBA range: start 0x0 length 0x400
00:27:40.650 Nvme2n1 : 1.13 230.30 14.39 0.00 0.00 264685.28 17185.00 264085.81
00:27:40.650 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:40.650 Verification LBA range: start 0x0 length 0x400
00:27:40.650 Nvme3n1 : 1.15 222.94 13.93 0.00 0.00 275112.96 18835.53 265639.25
00:27:40.650 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:40.650 Verification LBA range: start 0x0 length 0x400
00:27:40.650 Nvme4n1 : 1.17 273.48 17.09 0.00 0.00 220783.96 17379.18 260978.92
00:27:40.650 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:40.650 Verification LBA range: start 0x0 length 0x400
00:27:40.650 Nvme5n1 : 1.18 217.14 13.57 0.00 0.00 273812.29 20777.34 288940.94
00:27:40.650 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:40.650 Verification LBA range: start 0x0 length 0x400
00:27:40.650 Nvme6n1 : 1.16 220.86 13.80 0.00 0.00 264331.76 20194.80 268746.15
00:27:40.650 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:40.650 Verification LBA range: start 0x0 length 0x400
00:27:40.650 Nvme7n1 : 1.18 272.30 17.02 0.00 0.00 210664.87 16796.63 278066.82
00:27:40.650 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:40.650 Verification LBA range: start 0x0 length 0x400
00:27:40.650 Nvme8n1 : 1.14 223.91 13.99 0.00 0.00 251344.78 17961.72 260978.92
00:27:40.650 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:40.650 Verification LBA range: start 0x0 length 0x400
00:27:40.650 Nvme9n1 : 1.18 216.24 13.52 0.00 0.00 257323.61 21262.79 304475.40
00:27:40.650 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:40.651 Verification LBA range: start 0x0 length 0x400
00:27:40.651 Nvme10n1 : 1.17 222.69 13.92 0.00 0.00 244003.99 1389.61 265639.25
00:27:40.651 ===================================================================================================================
00:27:40.651 Total : 2279.46 142.47 0.00 0.00 257016.28 1389.61 304475.40
00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:40.651
05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:40.651 rmmod nvme_tcp 00:27:40.651 rmmod nvme_fabrics 00:27:40.651 rmmod nvme_keyring 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1707377 ']' 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1707377 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1707377 ']' 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1707377 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1707377 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1707377' 00:27:40.651 killing process 
with pid 1707377 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1707377 00:27:40.651 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1707377 00:27:41.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:43.116 00:27:43.116 real 0m11.569s 00:27:43.116 user 0m33.363s 00:27:43.116 sys 0m3.190s 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.116 ************************************ 00:27:43.116 END TEST nvmf_shutdown_tc1 00:27:43.116 ************************************ 
00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:43.116 ************************************ 00:27:43.116 START TEST nvmf_shutdown_tc2 00:27:43.116 ************************************ 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.116 05:47:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:43.116 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:43.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.117 05:47:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:43.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:43.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:43.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.117 
05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.117 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.118 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:43.118 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.118 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.118 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:27:43.118 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:43.118 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.118 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:43.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:43.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms
00:27:43.376
00:27:43.376 --- 10.0.0.2 ping statistics ---
00:27:43.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:43.376 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms
00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:43.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:43.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms
00:27:43.376
00:27:43.376 --- 10.0.0.1 ping statistics ---
00:27:43.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:43.376 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:43.376 05:47:36
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1708622 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1708622 00:27:43.376 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1708622 ']' 00:27:43.377 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.377 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:43.377 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:43.377 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:43.377 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.377 [2024-07-25 05:47:37.021165] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:27:43.377 [2024-07-25 05:47:37.021264] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.377 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.635 [2024-07-25 05:47:37.088005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.635 [2024-07-25 05:47:37.176635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.635 [2024-07-25 05:47:37.176696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.635 [2024-07-25 05:47:37.176726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.635 [2024-07-25 05:47:37.176738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.635 [2024-07-25 05:47:37.176749] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:43.635 [2024-07-25 05:47:37.176881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.635 [2024-07-25 05:47:37.176945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.635 [2024-07-25 05:47:37.177011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:43.635 [2024-07-25 05:47:37.177014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.635 [2024-07-25 05:47:37.327611] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.635 05:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:43.635 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.893 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:43.894 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.894 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:43.894 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:43.894 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:43.894 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:43.894 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.894 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.894 Malloc1 00:27:43.894 [2024-07-25 05:47:37.411089] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.894 Malloc2 00:27:43.894 Malloc3 00:27:43.894 Malloc4 00:27:43.894 Malloc5 00:27:44.152 Malloc6 00:27:44.152 Malloc7 00:27:44.152 Malloc8 00:27:44.152 Malloc9 
00:27:44.152 Malloc10 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1708802 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1708802 /var/tmp/bdevperf.sock 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1708802 ']' 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:44.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.410 { 00:27:44.410 "params": { 00:27:44.410 "name": "Nvme$subsystem", 00:27:44.410 "trtype": "$TEST_TRANSPORT", 00:27:44.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.410 "adrfam": "ipv4", 00:27:44.410 "trsvcid": "$NVMF_PORT", 00:27:44.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.410 "hdgst": ${hdgst:-false}, 00:27:44.410 "ddgst": ${ddgst:-false} 00:27:44.410 }, 00:27:44.410 "method": "bdev_nvme_attach_controller" 00:27:44.410 } 00:27:44.410 EOF 00:27:44.410 )") 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.410 { 00:27:44.410 "params": { 00:27:44.410 "name": "Nvme$subsystem", 00:27:44.410 "trtype": "$TEST_TRANSPORT", 00:27:44.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.410 "adrfam": "ipv4", 00:27:44.410 "trsvcid": "$NVMF_PORT", 00:27:44.410 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.410 "hdgst": ${hdgst:-false}, 00:27:44.410 "ddgst": ${ddgst:-false} 00:27:44.410 }, 00:27:44.410 "method": "bdev_nvme_attach_controller" 00:27:44.410 } 00:27:44.410 EOF 00:27:44.410 )") 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.410 { 00:27:44.410 "params": { 00:27:44.410 "name": "Nvme$subsystem", 00:27:44.410 "trtype": "$TEST_TRANSPORT", 00:27:44.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.410 "adrfam": "ipv4", 00:27:44.410 "trsvcid": "$NVMF_PORT", 00:27:44.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.410 "hdgst": ${hdgst:-false}, 00:27:44.410 "ddgst": ${ddgst:-false} 00:27:44.410 }, 00:27:44.410 "method": "bdev_nvme_attach_controller" 00:27:44.410 } 00:27:44.410 EOF 00:27:44.410 )") 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.410 { 00:27:44.410 "params": { 00:27:44.410 "name": "Nvme$subsystem", 00:27:44.410 "trtype": "$TEST_TRANSPORT", 00:27:44.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.410 "adrfam": "ipv4", 00:27:44.410 "trsvcid": "$NVMF_PORT", 00:27:44.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.410 "hdgst": 
${hdgst:-false}, 00:27:44.410 "ddgst": ${ddgst:-false} 00:27:44.410 }, 00:27:44.410 "method": "bdev_nvme_attach_controller" 00:27:44.410 } 00:27:44.410 EOF 00:27:44.410 )") 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.410 { 00:27:44.410 "params": { 00:27:44.410 "name": "Nvme$subsystem", 00:27:44.410 "trtype": "$TEST_TRANSPORT", 00:27:44.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.410 "adrfam": "ipv4", 00:27:44.410 "trsvcid": "$NVMF_PORT", 00:27:44.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.410 "hdgst": ${hdgst:-false}, 00:27:44.410 "ddgst": ${ddgst:-false} 00:27:44.410 }, 00:27:44.410 "method": "bdev_nvme_attach_controller" 00:27:44.410 } 00:27:44.410 EOF 00:27:44.410 )") 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.410 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.410 { 00:27:44.410 "params": { 00:27:44.410 "name": "Nvme$subsystem", 00:27:44.410 "trtype": "$TEST_TRANSPORT", 00:27:44.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.410 "adrfam": "ipv4", 00:27:44.410 "trsvcid": "$NVMF_PORT", 00:27:44.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.410 "hdgst": ${hdgst:-false}, 00:27:44.410 "ddgst": ${ddgst:-false} 00:27:44.410 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 
00:27:44.411 } 00:27:44.411 EOF 00:27:44.411 )") 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.411 { 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme$subsystem", 00:27:44.411 "trtype": "$TEST_TRANSPORT", 00:27:44.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "$NVMF_PORT", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.411 "hdgst": ${hdgst:-false}, 00:27:44.411 "ddgst": ${ddgst:-false} 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 } 00:27:44.411 EOF 00:27:44.411 )") 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.411 { 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme$subsystem", 00:27:44.411 "trtype": "$TEST_TRANSPORT", 00:27:44.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "$NVMF_PORT", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.411 "hdgst": ${hdgst:-false}, 00:27:44.411 "ddgst": ${ddgst:-false} 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 } 00:27:44.411 EOF 00:27:44.411 )") 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@554 -- # cat 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.411 { 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme$subsystem", 00:27:44.411 "trtype": "$TEST_TRANSPORT", 00:27:44.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "$NVMF_PORT", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.411 "hdgst": ${hdgst:-false}, 00:27:44.411 "ddgst": ${ddgst:-false} 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 } 00:27:44.411 EOF 00:27:44.411 )") 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.411 { 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme$subsystem", 00:27:44.411 "trtype": "$TEST_TRANSPORT", 00:27:44.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "$NVMF_PORT", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.411 "hdgst": ${hdgst:-false}, 00:27:44.411 "ddgst": ${ddgst:-false} 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 } 00:27:44.411 EOF 00:27:44.411 )") 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@556 -- # jq . 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:44.411 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme1", 00:27:44.411 "trtype": "tcp", 00:27:44.411 "traddr": "10.0.0.2", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "4420", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:44.411 "hdgst": false, 00:27:44.411 "ddgst": false 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 },{ 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme2", 00:27:44.411 "trtype": "tcp", 00:27:44.411 "traddr": "10.0.0.2", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "4420", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:44.411 "hdgst": false, 00:27:44.411 "ddgst": false 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 },{ 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme3", 00:27:44.411 "trtype": "tcp", 00:27:44.411 "traddr": "10.0.0.2", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "4420", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:44.411 "hdgst": false, 00:27:44.411 "ddgst": false 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 },{ 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme4", 00:27:44.411 "trtype": "tcp", 00:27:44.411 "traddr": "10.0.0.2", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "4420", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:44.411 "hdgst": false, 00:27:44.411 "ddgst": false 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 },{ 
00:27:44.411 "params": { 00:27:44.411 "name": "Nvme5", 00:27:44.411 "trtype": "tcp", 00:27:44.411 "traddr": "10.0.0.2", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "4420", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:44.411 "hdgst": false, 00:27:44.411 "ddgst": false 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 },{ 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme6", 00:27:44.411 "trtype": "tcp", 00:27:44.411 "traddr": "10.0.0.2", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "4420", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:44.411 "hdgst": false, 00:27:44.411 "ddgst": false 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 },{ 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme7", 00:27:44.411 "trtype": "tcp", 00:27:44.411 "traddr": "10.0.0.2", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "4420", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:44.411 "hdgst": false, 00:27:44.411 "ddgst": false 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 },{ 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme8", 00:27:44.411 "trtype": "tcp", 00:27:44.411 "traddr": "10.0.0.2", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "4420", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:44.411 "hdgst": false, 00:27:44.411 "ddgst": false 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 },{ 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme9", 00:27:44.411 "trtype": "tcp", 00:27:44.411 "traddr": "10.0.0.2", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "4420", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:44.411 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:44.411 "hdgst": false, 00:27:44.411 "ddgst": false 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 },{ 00:27:44.411 "params": { 00:27:44.411 "name": "Nvme10", 00:27:44.411 "trtype": "tcp", 00:27:44.411 "traddr": "10.0.0.2", 00:27:44.411 "adrfam": "ipv4", 00:27:44.411 "trsvcid": "4420", 00:27:44.411 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:44.411 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:44.411 "hdgst": false, 00:27:44.411 "ddgst": false 00:27:44.411 }, 00:27:44.411 "method": "bdev_nvme_attach_controller" 00:27:44.411 }' 00:27:44.411 [2024-07-25 05:47:37.918684] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:27:44.411 [2024-07-25 05:47:37.918760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708802 ] 00:27:44.411 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.411 [2024-07-25 05:47:37.982237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.411 [2024-07-25 05:47:38.068457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.307 Running I/O for 10 seconds... 
00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:46.307 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1708802 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1708802 ']' 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1708802 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1708802 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1708802' 00:27:46.564 killing process with pid 1708802 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1708802 00:27:46.564 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1708802 00:27:46.821 
Received shutdown signal, test time was about 0.870575 seconds 00:27:46.821 00:27:46.821 Latency(us) 00:27:46.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.821 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:46.821 Verification LBA range: start 0x0 length 0x400 00:27:46.821 Nvme1n1 : 0.79 244.13 15.26 0.00 0.00 258385.04 19418.07 234570.33 00:27:46.821 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:46.821 Verification LBA range: start 0x0 length 0x400 00:27:46.821 Nvme2n1 : 0.81 176.83 11.05 0.00 0.00 339188.09 11990.66 271853.04 00:27:46.821 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:46.821 Verification LBA range: start 0x0 length 0x400 00:27:46.821 Nvme3n1 : 0.87 294.35 18.40 0.00 0.00 194924.09 9077.95 242337.56 00:27:46.821 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:46.821 Verification LBA range: start 0x0 length 0x400 00:27:46.821 Nvme4n1 : 0.83 309.87 19.37 0.00 0.00 189867.99 17476.27 217482.43 00:27:46.821 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:46.821 Verification LBA range: start 0x0 length 0x400 00:27:46.821 Nvme5n1 : 0.81 236.12 14.76 0.00 0.00 243099.88 19903.53 257872.02 00:27:46.821 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:46.821 Verification LBA range: start 0x0 length 0x400 00:27:46.821 Nvme6n1 : 0.79 161.14 10.07 0.00 0.00 346292.53 23690.05 310689.19 00:27:46.821 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:46.821 Verification LBA range: start 0x0 length 0x400 00:27:46.821 Nvme7n1 : 0.80 240.03 15.00 0.00 0.00 226038.27 20680.25 254765.13 00:27:46.821 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:46.821 Verification LBA range: start 0x0 length 0x400 00:27:46.821 Nvme8n1 : 0.82 234.76 14.67 0.00 0.00 226477.95 17185.00 260978.92 
00:27:46.821 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:46.821 Verification LBA range: start 0x0 length 0x400 00:27:46.821 Nvme9n1 : 0.82 233.15 14.57 0.00 0.00 222531.89 19709.35 259425.47 00:27:46.821 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:46.821 Verification LBA range: start 0x0 length 0x400 00:27:46.821 Nvme10n1 : 0.80 159.14 9.95 0.00 0.00 315802.36 38447.79 315349.52 00:27:46.821 =================================================================================================================== 00:27:46.821 Total : 2289.52 143.10 0.00 0.00 244603.50 9077.95 315349.52 00:27:47.078 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1708622 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:48.010 05:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.010 rmmod nvme_tcp 00:27:48.010 rmmod nvme_fabrics 00:27:48.010 rmmod nvme_keyring 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1708622 ']' 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1708622 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1708622 ']' 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1708622 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:48.010 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1708622 00:27:48.267 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- 
# process_name=reactor_1 00:27:48.267 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:48.267 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1708622' 00:27:48.267 killing process with pid 1708622 00:27:48.267 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1708622 00:27:48.267 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1708622 00:27:48.526 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:48.526 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:48.526 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:48.526 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.526 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:48.526 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.526 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.526 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:51.053 00:27:51.053 real 0m7.433s 00:27:51.053 user 0m21.869s 00:27:51.053 sys 0m1.502s 00:27:51.053 05:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.053 ************************************ 00:27:51.053 END TEST nvmf_shutdown_tc2 00:27:51.053 ************************************ 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:51.053 ************************************ 00:27:51.053 START TEST nvmf_shutdown_tc3 00:27:51.053 ************************************ 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 
-- # net_devs=() 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:51.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:51.053 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.053 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.054 05:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:51.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.054 05:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:51.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:51.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:51.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:27:51.054 00:27:51.054 --- 10.0.0.2 ping statistics --- 00:27:51.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.054 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:51.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:27:51.054 00:27:51.054 --- 10.0.0.1 ping statistics --- 00:27:51.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.054 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:51.054 05:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1709702 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1709702 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1709702 ']' 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:51.054 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.054 [2024-07-25 05:47:44.490618] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:27:51.054 [2024-07-25 05:47:44.490715] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.054 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.054 [2024-07-25 05:47:44.564628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.054 [2024-07-25 05:47:44.653725] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.054 [2024-07-25 05:47:44.653782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.054 [2024-07-25 05:47:44.653811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.054 [2024-07-25 05:47:44.653822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.054 [2024-07-25 05:47:44.653832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:51.054 [2024-07-25 05:47:44.657262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.054 [2024-07-25 05:47:44.657334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.054 [2024-07-25 05:47:44.661366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:51.054 [2024-07-25 05:47:44.661373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.312 [2024-07-25 05:47:44.824692] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.312 05:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.312 05:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.312 Malloc1 00:27:51.312 [2024-07-25 05:47:44.914084] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.312 Malloc2 00:27:51.312 Malloc3 00:27:51.571 Malloc4 00:27:51.571 Malloc5 00:27:51.571 Malloc6 00:27:51.571 Malloc7 00:27:51.571 Malloc8 00:27:51.830 Malloc9 
00:27:51.830 Malloc10 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1709882 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1709882 /var/tmp/bdevperf.sock 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1709882 ']' 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:51.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.830 { 00:27:51.830 "params": { 00:27:51.830 "name": "Nvme$subsystem", 00:27:51.830 "trtype": "$TEST_TRANSPORT", 00:27:51.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.830 "adrfam": "ipv4", 00:27:51.830 "trsvcid": "$NVMF_PORT", 00:27:51.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.830 "hdgst": ${hdgst:-false}, 00:27:51.830 "ddgst": ${ddgst:-false} 00:27:51.830 }, 00:27:51.830 "method": "bdev_nvme_attach_controller" 00:27:51.830 } 00:27:51.830 EOF 00:27:51.830 )") 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.830 { 00:27:51.830 "params": { 00:27:51.830 "name": "Nvme$subsystem", 00:27:51.830 "trtype": "$TEST_TRANSPORT", 00:27:51.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.830 "adrfam": "ipv4", 00:27:51.830 "trsvcid": "$NVMF_PORT", 00:27:51.830 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.830 "hdgst": ${hdgst:-false}, 00:27:51.830 "ddgst": ${ddgst:-false} 00:27:51.830 }, 00:27:51.830 "method": "bdev_nvme_attach_controller" 00:27:51.830 } 00:27:51.830 EOF 00:27:51.830 )") 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.830 { 00:27:51.830 "params": { 00:27:51.830 "name": "Nvme$subsystem", 00:27:51.830 "trtype": "$TEST_TRANSPORT", 00:27:51.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.830 "adrfam": "ipv4", 00:27:51.830 "trsvcid": "$NVMF_PORT", 00:27:51.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.830 "hdgst": ${hdgst:-false}, 00:27:51.830 "ddgst": ${ddgst:-false} 00:27:51.830 }, 00:27:51.830 "method": "bdev_nvme_attach_controller" 00:27:51.830 } 00:27:51.830 EOF 00:27:51.830 )") 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.830 { 00:27:51.830 "params": { 00:27:51.830 "name": "Nvme$subsystem", 00:27:51.830 "trtype": "$TEST_TRANSPORT", 00:27:51.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.830 "adrfam": "ipv4", 00:27:51.830 "trsvcid": "$NVMF_PORT", 00:27:51.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.830 "hdgst": 
${hdgst:-false}, 00:27:51.830 "ddgst": ${ddgst:-false} 00:27:51.830 }, 00:27:51.830 "method": "bdev_nvme_attach_controller" 00:27:51.830 } 00:27:51.830 EOF 00:27:51.830 )") 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.830 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.830 { 00:27:51.830 "params": { 00:27:51.830 "name": "Nvme$subsystem", 00:27:51.830 "trtype": "$TEST_TRANSPORT", 00:27:51.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.830 "adrfam": "ipv4", 00:27:51.830 "trsvcid": "$NVMF_PORT", 00:27:51.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.830 "hdgst": ${hdgst:-false}, 00:27:51.830 "ddgst": ${ddgst:-false} 00:27:51.830 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 } 00:27:51.831 EOF 00:27:51.831 )") 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.831 { 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme$subsystem", 00:27:51.831 "trtype": "$TEST_TRANSPORT", 00:27:51.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "$NVMF_PORT", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.831 "hdgst": ${hdgst:-false}, 00:27:51.831 "ddgst": ${ddgst:-false} 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 
00:27:51.831 } 00:27:51.831 EOF 00:27:51.831 )") 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.831 { 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme$subsystem", 00:27:51.831 "trtype": "$TEST_TRANSPORT", 00:27:51.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "$NVMF_PORT", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.831 "hdgst": ${hdgst:-false}, 00:27:51.831 "ddgst": ${ddgst:-false} 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 } 00:27:51.831 EOF 00:27:51.831 )") 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.831 { 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme$subsystem", 00:27:51.831 "trtype": "$TEST_TRANSPORT", 00:27:51.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "$NVMF_PORT", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.831 "hdgst": ${hdgst:-false}, 00:27:51.831 "ddgst": ${ddgst:-false} 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 } 00:27:51.831 EOF 00:27:51.831 )") 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@554 -- # cat 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.831 { 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme$subsystem", 00:27:51.831 "trtype": "$TEST_TRANSPORT", 00:27:51.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "$NVMF_PORT", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.831 "hdgst": ${hdgst:-false}, 00:27:51.831 "ddgst": ${ddgst:-false} 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 } 00:27:51.831 EOF 00:27:51.831 )") 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.831 { 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme$subsystem", 00:27:51.831 "trtype": "$TEST_TRANSPORT", 00:27:51.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "$NVMF_PORT", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.831 "hdgst": ${hdgst:-false}, 00:27:51.831 "ddgst": ${ddgst:-false} 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 } 00:27:51.831 EOF 00:27:51.831 )") 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@556 -- # jq . 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:51.831 05:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme1", 00:27:51.831 "trtype": "tcp", 00:27:51.831 "traddr": "10.0.0.2", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "4420", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:51.831 "hdgst": false, 00:27:51.831 "ddgst": false 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 },{ 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme2", 00:27:51.831 "trtype": "tcp", 00:27:51.831 "traddr": "10.0.0.2", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "4420", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:51.831 "hdgst": false, 00:27:51.831 "ddgst": false 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 },{ 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme3", 00:27:51.831 "trtype": "tcp", 00:27:51.831 "traddr": "10.0.0.2", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "4420", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:51.831 "hdgst": false, 00:27:51.831 "ddgst": false 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 },{ 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme4", 00:27:51.831 "trtype": "tcp", 00:27:51.831 "traddr": "10.0.0.2", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "4420", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:51.831 "hdgst": false, 00:27:51.831 "ddgst": false 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 },{ 
00:27:51.831 "params": { 00:27:51.831 "name": "Nvme5", 00:27:51.831 "trtype": "tcp", 00:27:51.831 "traddr": "10.0.0.2", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "4420", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:51.831 "hdgst": false, 00:27:51.831 "ddgst": false 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 },{ 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme6", 00:27:51.831 "trtype": "tcp", 00:27:51.831 "traddr": "10.0.0.2", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "4420", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:51.831 "hdgst": false, 00:27:51.831 "ddgst": false 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 },{ 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme7", 00:27:51.831 "trtype": "tcp", 00:27:51.831 "traddr": "10.0.0.2", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "4420", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:51.831 "hdgst": false, 00:27:51.831 "ddgst": false 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 },{ 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme8", 00:27:51.831 "trtype": "tcp", 00:27:51.831 "traddr": "10.0.0.2", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "4420", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:51.831 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:51.831 "hdgst": false, 00:27:51.831 "ddgst": false 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 },{ 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme9", 00:27:51.831 "trtype": "tcp", 00:27:51.831 "traddr": "10.0.0.2", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "4420", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:51.831 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:51.831 "hdgst": false, 00:27:51.831 "ddgst": false 00:27:51.831 }, 00:27:51.831 "method": "bdev_nvme_attach_controller" 00:27:51.831 },{ 00:27:51.831 "params": { 00:27:51.831 "name": "Nvme10", 00:27:51.831 "trtype": "tcp", 00:27:51.831 "traddr": "10.0.0.2", 00:27:51.831 "adrfam": "ipv4", 00:27:51.831 "trsvcid": "4420", 00:27:51.831 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:51.832 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:51.832 "hdgst": false, 00:27:51.832 "ddgst": false 00:27:51.832 }, 00:27:51.832 "method": "bdev_nvme_attach_controller" 00:27:51.832 }' 00:27:51.832 [2024-07-25 05:47:45.422036] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:27:51.832 [2024-07-25 05:47:45.422113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709882 ] 00:27:51.832 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.832 [2024-07-25 05:47:45.485808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.089 [2024-07-25 05:47:45.573012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.460 Running I/O for 10 seconds... 
00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:53.728 05:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.728 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.986 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.987 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:53.987 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:53.987 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:54.245 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:54.245 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:54.245 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:54.245 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:54.245 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.245 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:54.245 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:27:54.245 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:54.245 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:54.245 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:54.520 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:54.520 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:54.520 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:54.520 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:54.520 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.520 05:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:54.520 05:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1709702 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1709702 ']' 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1709702 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1709702 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1709702' 00:27:54.520 killing process with pid 1709702 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1709702 00:27:54.520 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1709702 00:27:54.520 [2024-07-25 05:47:48.059094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) 
to be set 00:27:54.520 [2024-07-25 05:47:48.059405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 
05:47:48.059568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059721] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059734] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.520 [2024-07-25 05:47:48.059863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.059875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.059892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.059905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.059918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.059931] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.059943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.059956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.059968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.059981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.059993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.060006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.060019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8550 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 
is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 00:27:54.521 [2024-07-25 05:47:48.061505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set 
00:27:54.521 [2024-07-25 05:47:48.061517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabe20 is same with the state(5) to be set
00:27:54.521 [2024-07-25 05:47:48.064991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8ed0 is same with the state(5) to be set
00:27:54.522 [2024-07-25 05:47:48.066180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.522 [2024-07-25 05:47:48.066223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.522 [2024-07-25 05:47:48.066248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.522 [2024-07-25 05:47:48.066266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.522 [2024-07-25 05:47:48.066280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.522 [2024-07-25 05:47:48.066294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.522 [2024-07-25 05:47:48.066310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.522 [2024-07-25 05:47:48.066323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.522 [2024-07-25 05:47:48.066336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20cd0 is same with the state(5) to be
set 00:27:54.522 [2024-07-25 05:47:48.066411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.522 [2024-07-25 05:47:48.066433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.522 [2024-07-25 05:47:48.066447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.522 [2024-07-25 05:47:48.066461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.522 [2024-07-25 05:47:48.066475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.522 [2024-07-25 05:47:48.066488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.522 [2024-07-25 05:47:48.066502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.522 [2024-07-25 05:47:48.066516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.522 [2024-07-25 05:47:48.066528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d940 is same with the state(5) to be set 00:27:54.523 [2024-07-25 05:47:48.066630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.523 [2024-07-25 05:47:48.066651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.066666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.523 [2024-07-25 05:47:48.066680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.066693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.523 [2024-07-25 05:47:48.066707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.066722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.523 [2024-07-25 05:47:48.066741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.066755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18953c0 is same with the state(5) to be set 00:27:54.523 [2024-07-25 05:47:48.066800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.523 [2024-07-25 05:47:48.066820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.066835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.523 [2024-07-25 05:47:48.066849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.066863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:54.523 [2024-07-25 05:47:48.066877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.066892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.523 [2024-07-25 05:47:48.066906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.066919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1454ee0 is same with the state(5) to be set 00:27:54.523 [2024-07-25 05:47:48.067285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 
05:47:48.067944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.067974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.067989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.068003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.068019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.068033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.068048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.068062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.068077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.068092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.068107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.068122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.068138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.068152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.068167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.068181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.523 [2024-07-25 05:47:48.068196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.523 [2024-07-25 05:47:48.068210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524 [2024-07-25 05:47:48.068226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524 [2024-07-25 05:47:48.068240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524 [2024-07-25 05:47:48.068264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524 [2024-07-25 05:47:48.068278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068638] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068724]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.524
[2024-07-25 05:47:48.068737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.524
[2024-07-25 05:47:48.068750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.524
[2024-07-25 05:47:48.068765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.068766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.068798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.068811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.068824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.068837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.068851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.068865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.068893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.068907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.068920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.068933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.068946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.068961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.068992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state
of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.068996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.069018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.069044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069059] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069076] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.069087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.069113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.069153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069170]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.069184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.069209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.069261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.069301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.525
[2024-07-25 05:47:48.069329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.525
[2024-07-25 05:47:48.069334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.525
[2024-07-25 05:47:48.069342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c93b0 is same with the state(5) to be set 00:27:54.526
[2024-07-25 05:47:48.069376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.526
[2024-07-25 05:47:48.069454] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1459440 was disconnected and
freed. reset controller. 00:27:54.526 [2024-07-25 05:47:48.071089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 
[2024-07-25 05:47:48.071393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071657] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.071991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.072005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.072017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9870 is same with the state(5) to be set 00:27:54.526 [2024-07-25 05:47:48.072488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072600] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072766] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.072973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.072989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.073002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.073018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.073032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.526 [2024-07-25 05:47:48.073048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.526 [2024-07-25 05:47:48.073062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527 [2024-07-25 05:47:48.073077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527 [2024-07-25 05:47:48.073095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527 
[2024-07-25 05:47:48.073111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527 [2024-07-25 05:47:48.073107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527 [2024-07-25 05:47:48.073126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527 [2024-07-25 05:47:48.073136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527 [2024-07-25 05:47:48.073142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527 [2024-07-25 05:47:48.073151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527 [2024-07-25 05:47:48.073157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527 [2024-07-25 05:47:48.073164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527 [2024-07-25 05:47:48.073173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527 [2024-07-25 05:47:48.073178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527 [2024-07-25 05:47:48.073187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527 [2024-07-25 05:47:48.073190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same 
with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073387] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state
of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.527
[2024-07-25 05:47:48.073677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.527
[2024-07-25 05:47:48.073690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.527
[2024-07-25 05:47:48.073706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528
[2024-07-25 05:47:48.073708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528
[2024-07-25 05:47:48.073722] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528
[2024-07-25 05:47:48.073738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528
[2024-07-25 05:47:48.073767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528
[2024-07-25 05:47:48.073780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528
[2024-07-25 05:47:48.073793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528
[2024-07-25 05:47:48.073806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528
[2024-07-25 05:47:48.073820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528 [2024-07-25
05:47:48.073829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528
[2024-07-25 05:47:48.073833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528
[2024-07-25 05:47:48.073846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528
[2024-07-25 05:47:48.073861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528
[2024-07-25 05:47:48.073889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528
[2024-07-25 05:47:48.073907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:27:54.528
[2024-07-25 05:47:48.073920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528
[2024-07-25 05:47:48.073933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528
[2024-07-25 05:47:48.073946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528
[2024-07-25 05:47:48.073959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528
[2024-07-25 05:47:48.073972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.073986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528
[2024-07-25 05:47:48.073987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528
[2024-07-25 05:47:48.074003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528 [2024-07-25 05:47:48.074004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074015] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9d30 is same with the state(5) to be set 00:27:54.528 [2024-07-25 05:47:48.074018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 
[2024-07-25 05:47:48.074154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.528 [2024-07-25 05:47:48.074483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.528 [2024-07-25 05:47:48.074575] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x1a0c3b0 was disconnected and freed. reset controller. 00:27:54.528 [2024-07-25 05:47:48.075358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:54.528 [2024-07-25 05:47:48.075401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188d940 (9): Bad file descriptor 00:27:54.528 [2024-07-25 05:47:48.075733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.528 [2024-07-25 05:47:48.075762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.528 [2024-07-25 05:47:48.075785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.528 [2024-07-25 05:47:48.075807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.528 [2024-07-25 05:47:48.075828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.075850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.075872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.075894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.075915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.075929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 
00:27:54.529 [2024-07-25 05:47:48.075942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.075954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.075968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.075980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.075993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076095] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.529 [2024-07-25 05:47:48.076266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set
00:27:54.530 [2024-07-25 05:47:48.076609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.076621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.076633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.076646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca1f0 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.077486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:54.530 [2024-07-25 05:47:48.077549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194a4b0 (9): Bad file descriptor 00:27:54.530 [2024-07-25 05:47:48.077597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a20cd0 (9): Bad file descriptor 00:27:54.530 [2024-07-25 05:47:48.077669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.077692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.077707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.077721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.077735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 
[2024-07-25 05:47:48.077750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.077764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.077778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.077791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898ff0 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.077834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.077854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.077869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.077883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.077897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.077911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.077930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.077945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.077957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944f20 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.078003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.078023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.078038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.078053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.078067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.078080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.078094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.078108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.078121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145bf90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.078167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 
05:47:48.078187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.078203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.078217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.078231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.078252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.078269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.530 [2024-07-25 05:47:48.078283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.078305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383610 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.078336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18953c0 (9): Bad file descriptor 00:27:54.530 [2024-07-25 05:47:48.078365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1454ee0 (9): Bad file descriptor 00:27:54.530 [2024-07-25 05:47:48.079007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be 
set 00:27:54.530 [2024-07-25 05:47:48.079049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.530 [2024-07-25 05:47:48.079071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.079085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.530 [2024-07-25 05:47:48.079112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.079125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.530 [2024-07-25 05:47:48.079139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 
is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.079152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.530 [2024-07-25 05:47:48.079166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.530 [2024-07-25 05:47:48.079196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.530 [2024-07-25 05:47:48.079200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.530 [2024-07-25 05:47:48.079210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079722] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.531 [2024-07-25 05:47:48.079817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.531 [2024-07-25 05:47:48.079830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.531 [2024-07-25 05:47:48.079830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.532 [2024-07-25 05:47:48.079846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.532 [2024-07-25 05:47:48.079846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.532 [2024-07-25 05:47:48.079862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.532 [2024-07-25 05:47:48.079865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.532 [2024-07-25 05:47:48.079875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.532 [2024-07-25 05:47:48.079883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.532 [2024-07-25 05:47:48.079888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.532 [2024-07-25 05:47:48.079899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.532 [2024-07-25 05:47:48.079900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.532 [2024-07-25 05:47:48.079915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.532 [2024-07-25 05:47:48.079916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab90 is same with the state(5) to be set 00:27:54.532 [2024-07-25 05:47:48.079933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.532 [2024-07-25 05:47:48.079948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.532 [2024-07-25 05:47:48.079966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.532 [2024-07-25 05:47:48.079990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.532 [2024-07-25 05:47:48.080015] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.532 [2024-07-25 05:47:48.080955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.532 [2024-07-25 05:47:48.080970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.080984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.081003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.081017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.081032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.081047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.081062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.081076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.081091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.081105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.081188] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x233f870 was disconnected and freed. reset controller.
00:27:54.533 [2024-07-25 05:47:48.081340] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:54.533 [2024-07-25 05:47:48.081539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.533 [2024-07-25 05:47:48.081576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x188d940 with addr=10.0.0.2, port=4420
00:27:54.533 [2024-07-25 05:47:48.081592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d940 is same with the state(5) to be set
00:27:54.533 [2024-07-25 05:47:48.081680] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:54.533 [2024-07-25 05:47:48.081766] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:54.533 [2024-07-25 05:47:48.083389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:54.533 [2024-07-25 05:47:48.083423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145bf90 (9): Bad file descriptor
00:27:54.533 [2024-07-25 05:47:48.083560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.533 [2024-07-25 05:47:48.083585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194a4b0 with addr=10.0.0.2, port=4420
00:27:54.533 [2024-07-25 05:47:48.083600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194a4b0 is same with the state(5) to be set
00:27:54.533 [2024-07-25 05:47:48.083619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188d940 (9): Bad file descriptor
00:27:54.533 [2024-07-25 05:47:48.083723] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:54.533 [2024-07-25 05:47:48.083802] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:54.533 [2024-07-25 05:47:48.083980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194a4b0 (9): Bad file descriptor
00:27:54.533 [2024-07-25 05:47:48.084006] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:54.533 [2024-07-25 05:47:48.084020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:54.533 [2024-07-25 05:47:48.084037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:54.533 [2024-07-25 05:47:48.084505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:54.533 [2024-07-25 05:47:48.084648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.533 [2024-07-25 05:47:48.084675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x145bf90 with addr=10.0.0.2, port=4420
00:27:54.533 [2024-07-25 05:47:48.084696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145bf90 is same with the state(5) to be set
00:27:54.533 [2024-07-25 05:47:48.084711] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:27:54.533 [2024-07-25 05:47:48.084723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:27:54.533 [2024-07-25 05:47:48.084736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:27:54.533 [2024-07-25 05:47:48.084822] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:54.533 [2024-07-25 05:47:48.084896] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:54.533 [2024-07-25 05:47:48.084929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:54.533 [2024-07-25 05:47:48.084950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145bf90 (9): Bad file descriptor
00:27:54.533 [2024-07-25 05:47:48.085032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:27:54.533 [2024-07-25 05:47:48.085052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:27:54.533 [2024-07-25 05:47:48.085065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:27:54.533 [2024-07-25 05:47:48.085120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:54.533 [2024-07-25 05:47:48.087591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.533 [2024-07-25 05:47:48.087627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.087656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.533 [2024-07-25 05:47:48.087681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.087701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.533 [2024-07-25 05:47:48.087715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.087729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:54.533 [2024-07-25 05:47:48.087742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.087756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2bb50 is same with the state(5) to be set
00:27:54.533 [2024-07-25 05:47:48.087789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1898ff0 (9): Bad file descriptor
00:27:54.533 [2024-07-25 05:47:48.087820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1944f20 (9): Bad file descriptor
00:27:54.533 [2024-07-25 05:47:48.087852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383610 (9): Bad file descriptor
00:27:54.533 [2024-07-25 05:47:48.088007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.533 [2024-07-25 05:47:48.088459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.533 [2024-07-25 05:47:48.088475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.088978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.088994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.534 [2024-07-25 05:47:48.089470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.534 [2024-07-25 05:47:48.089486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.535 [2024-07-25 05:47:48.089500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.535 [2024-07-25 05:47:48.089516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.535 [2024-07-25 05:47:48.089530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.535 [2024-07-25 05:47:48.089555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.535 [2024-07-25 05:47:48.089569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.535 [2024-07-25 05:47:48.089585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.535 [2024-07-25 05:47:48.089599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.535 [2024-07-25 05:47:48.089614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.535 [2024-07-25 05:47:48.089628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.535 [2024-07-25 05:47:48.089647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.535 [2024-07-25 05:47:48.089662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.535 [2024-07-25 05:47:48.089678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.535 [2024-07-25 05:47:48.089692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.535 [2024-07-25 05:47:48.089707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.535 [2024-07-25 05:47:48.089722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.535 [2024-07-25 05:47:48.089737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.535 [2024-07-25 05:47:48.089751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.535 [2024-07-25 05:47:48.089766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.089780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.089796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.089810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.089826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.089840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.089856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.089870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.089886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.089900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.089916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.089930] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.089945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.089959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.089975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.089989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.090003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458140 is same with the state(5) to be set 00:27:54.535 [2024-07-25 05:47:48.091334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:54.535 [2024-07-25 05:47:48.091615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.091977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.535 [2024-07-25 05:47:48.091993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.535 [2024-07-25 05:47:48.092007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092304] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092466] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 
05:47:48.092819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.092984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.092999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.093013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.093028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.093042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.093058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.093072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.093087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.093101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.093116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.093130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.093146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.093160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.536 [2024-07-25 05:47:48.093175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.536 [2024-07-25 05:47:48.093189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.537 [2024-07-25 05:47:48.093205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.537 [2024-07-25 05:47:48.093219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.537 [2024-07-25 05:47:48.093234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.537 [2024-07-25 05:47:48.093254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.537 [2024-07-25 05:47:48.093274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.537 [2024-07-25 05:47:48.093288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.537 [2024-07-25 05:47:48.093303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0a480 is same with the state(5) to be set 00:27:54.537 [2024-07-25 05:47:48.094616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:54.537 [2024-07-25 05:47:48.094639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.537 [2024-07-25 05:47:48.094660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.537 [2024-07-25 05:47:48.094676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.537 [2024-07-25 05:47:48.094693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.537 [2024-07-25 05:47:48.094707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.537 [2024-07-25 05:47:48.094724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.537 [2024-07-25 05:47:48.094738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.537 [2024-07-25 05:47:48.094753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.537 [2024-07-25 05:47:48.094767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.537 [2024-07-25 05:47:48.094783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.537 [2024-07-25 05:47:48.094797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.537 [2024-07-25 05:47:48.094812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.537 [2024-07-25 05:47:48.094827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command/completion pairs repeated for cid:7-63 (lba:17280-24448, len:128), each aborted with SQ DELETION (00/08) qid:1 sqhd:0000, timestamps 05:47:48.094842-05:47:48.096572 ...]
00:27:54.538 [2024-07-25 05:47:48.096586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1986130 is same with the state(5) to be set
00:27:54.538 [2024-07-25 05:47:48.099019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:54.538 [2024-07-25 05:47:48.099053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:54.538 [2024-07-25 05:47:48.099071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:54.538 [2024-07-25 05:47:48.099196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2bb50 (9): Bad file descriptor
00:27:54.538 [2024-07-25 05:47:48.099264] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:54.538 [2024-07-25 05:47:48.099372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:54.538 [2024-07-25 05:47:48.099691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.538 [2024-07-25 05:47:48.099721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1454ee0 with addr=10.0.0.2, port=4420
00:27:54.538 [2024-07-25 05:47:48.099738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1454ee0 is same with the state(5) to be set
00:27:54.538 [2024-07-25 05:47:48.099868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.538 [2024-07-25 05:47:48.099893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18953c0 with addr=10.0.0.2, port=4420
00:27:54.538 [2024-07-25 05:47:48.099908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18953c0 is same with the state(5) to be set
00:27:54.538 [2024-07-25 05:47:48.100034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.538 [2024-07-25 05:47:48.100059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a20cd0 with addr=10.0.0.2, port=4420
00:27:54.538 [2024-07-25 05:47:48.100074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20cd0 is same with the state(5) to be set
00:27:54.538 [2024-07-25 05:47:48.100664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical command/completion pairs repeated for WRITE cid:0-3 (lba:32768-33152) and READ cid:4-51 (lba:25088-31104), each aborted with SQ DELETION (00/08) qid:1 sqhd:0000, timestamps 05:47:48.100687-05:47:48.102265; final entry truncated at end of excerpt ...]
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.102581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 
[2024-07-25 05:47:48.102611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.102625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0af20 is same with the state(5) to be set 00:27:54.540 [2024-07-25 05:47:48.103890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.103913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.103938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.103955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.103971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.103985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.104001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.104016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.104032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.104045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.104061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.104075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.104091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.104105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.104121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.104136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.104151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.104165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.104181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.104195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.104211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.104225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.104240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.540 [2024-07-25 05:47:48.104261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.540 [2024-07-25 05:47:48.104277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:54.541 [2024-07-25 05:47:48.104402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104566] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.104973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.104987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 
05:47:48.105080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105248] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.541 [2024-07-25 05:47:48.105426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.541 [2024-07-25 05:47:48.105441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.542 [2024-07-25 05:47:48.105475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.542 [2024-07-25 05:47:48.105504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.542 [2024-07-25 05:47:48.105534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.542 [2024-07-25 05:47:48.105563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.542 
[2024-07-25 05:47:48.105593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.542 [2024-07-25 05:47:48.105623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.542 [2024-07-25 05:47:48.105652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.542 [2024-07-25 05:47:48.105682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.542 [2024-07-25 05:47:48.105712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.542 [2024-07-25 05:47:48.105741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.542 [2024-07-25 05:47:48.105757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.105770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.105786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.105800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.105816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.105833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.105848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff0370 is same with the state(5) to be set
00:27:54.542 [2024-07-25 05:47:48.107106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.542 [2024-07-25 05:47:48.107905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.542 [2024-07-25 05:47:48.107920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.107935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.107950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.107965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.107981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.107994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.543 [2024-07-25 05:47:48.108915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.543 [2024-07-25 05:47:48.108929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.108945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.108959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.108974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.108988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.109007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.109021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.109037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.109051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.109065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2197df0 is same with the state(5) to be set
00:27:54.544 [2024-07-25 05:47:48.110585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:54.544 [2024-07-25 05:47:48.110618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:54.544 [2024-07-25 05:47:48.110638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:54.544 [2024-07-25 05:47:48.110655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:54.544 [2024-07-25 05:47:48.110672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:54.544 [2024-07-25 05:47:48.111033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.544 [2024-07-25 05:47:48.111063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x188d940 with addr=10.0.0.2, port=4420
00:27:54.544 [2024-07-25 05:47:48.111079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d940 is same with the state(5) to be set
00:27:54.544 [2024-07-25 05:47:48.111105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1454ee0 (9): Bad file descriptor
00:27:54.544 [2024-07-25 05:47:48.111126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18953c0 (9): Bad file descriptor
00:27:54.544 [2024-07-25 05:47:48.111144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a20cd0 (9): Bad file descriptor
00:27:54.544 [2024-07-25 05:47:48.111217] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:54.544 [2024-07-25 05:47:48.111254] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:54.544 [2024-07-25 05:47:48.111275] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:54.544 [2024-07-25 05:47:48.111295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188d940 (9): Bad file descriptor
00:27:54.544 [2024-07-25 05:47:48.111535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.544 [2024-07-25 05:47:48.111562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194a4b0 with addr=10.0.0.2, port=4420
00:27:54.544 [2024-07-25 05:47:48.111578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194a4b0 is same with the state(5) to be set
00:27:54.544 [2024-07-25 05:47:48.111727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.544 [2024-07-25 05:47:48.111753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x145bf90 with addr=10.0.0.2, port=4420
00:27:54.544 [2024-07-25 05:47:48.111768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145bf90 is same with the state(5) to be set
00:27:54.544 [2024-07-25 05:47:48.111890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.544 [2024-07-25 05:47:48.111914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1898ff0 with addr=10.0.0.2, port=4420
00:27:54.544 [2024-07-25 05:47:48.111930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898ff0 is same with the state(5) to be set
00:27:54.544 [2024-07-25 05:47:48.112166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.544 [2024-07-25 05:47:48.112190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1944f20 with addr=10.0.0.2, port=4420
00:27:54.544 [2024-07-25 05:47:48.112205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944f20 is same with the state(5) to be set
00:27:54.544 [2024-07-25 05:47:48.112318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.544 [2024-07-25 05:47:48.112344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1383610 with addr=10.0.0.2, port=4420
00:27:54.544 [2024-07-25 05:47:48.112359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383610 is same with the state(5) to be set
00:27:54.544 [2024-07-25 05:47:48.112377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:54.544 [2024-07-25 05:47:48.112391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:54.544 [2024-07-25 05:47:48.112408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:54.544 [2024-07-25 05:47:48.112428] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:54.544 [2024-07-25 05:47:48.112442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:54.544 [2024-07-25 05:47:48.112455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:54.544 [2024-07-25 05:47:48.112471] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:54.544 [2024-07-25 05:47:48.112485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:54.544 [2024-07-25 05:47:48.112498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:54.544 [2024-07-25 05:47:48.113345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.544 [2024-07-25 05:47:48.113833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.544 [2024-07-25 05:47:48.113848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.113862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.113878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.113892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.113907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.113926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.113942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.113956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.113972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.113985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:54.545 [2024-07-25 05:47:48.114369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:54.545 [2024-07-25 05:47:48.114383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 
05:47:48.114563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114730] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.114981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.114996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.545 [2024-07-25 05:47:48.115010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.545 [2024-07-25 05:47:48.115025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.546 [2024-07-25 05:47:48.115039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.546 [2024-07-25 05:47:48.115058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.546 
[2024-07-25 05:47:48.115073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.546 [2024-07-25 05:47:48.115088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.546 [2024-07-25 05:47:48.115102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.546 [2024-07-25 05:47:48.115118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.546 [2024-07-25 05:47:48.115131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.546 [2024-07-25 05:47:48.115147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.546 [2024-07-25 05:47:48.115161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.546 [2024-07-25 05:47:48.115176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.546 [2024-07-25 05:47:48.115190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.546 [2024-07-25 05:47:48.115205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.546 [2024-07-25 05:47:48.115219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.546 [2024-07-25 05:47:48.115234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.546 [2024-07-25 05:47:48.115256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.546 [2024-07-25 05:47:48.115272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.546 [2024-07-25 05:47:48.115287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.546 [2024-07-25 05:47:48.115301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1984c80 is same with the state(5) to be set 00:27:54.546 [2024-07-25 05:47:48.116994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:54.546 [2024-07-25 05:47:48.117020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:54.546 [2024-07-25 05:47:48.117032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:54.546 task offset: 28160 on job bdev=Nvme2n1 fails 00:27:54.546 00:27:54.546 Latency(us) 00:27:54.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.546 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.546 Job: Nvme1n1 ended in about 1.01 seconds with error 00:27:54.546 Verification LBA range: start 0x0 length 0x400 00:27:54.546 Nvme1n1 : 1.01 126.62 7.91 63.31 0.00 333639.43 40777.96 288940.94 00:27:54.546 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.546 Job: Nvme2n1 ended in about 0.99 seconds with error 00:27:54.546 Verification LBA range: start 0x0 length 0x400 00:27:54.546 Nvme2n1 : 0.99 197.57 12.35 64.51 0.00 237108.64 4296.25 259425.47 00:27:54.546 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.546 Job: Nvme3n1 ended in about 1.01 seconds with error 00:27:54.546 Verification LBA range: start 0x0 length 0x400 00:27:54.546 Nvme3n1 : 1.01 189.33 11.83 63.11 0.00 241598.39 18058.81 257872.02 00:27:54.546 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.546 Job: Nvme4n1 ended in about 1.02 seconds with error 00:27:54.546 Verification LBA range: start 0x0 length 0x400 00:27:54.546 Nvme4n1 : 1.02 191.51 11.97 62.53 0.00 235630.49 10437.21 250104.79 00:27:54.546 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.546 Job: Nvme5n1 ended in about 1.00 seconds with error 00:27:54.546 Verification LBA range: start 0x0 length 0x400 00:27:54.546 Nvme5n1 : 1.00 192.69 12.04 64.23 0.00 227959.85 19709.35 260978.92 00:27:54.546 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.546 Job: Nvme6n1 ended in about 1.03 seconds with error 00:27:54.546 Verification LBA range: start 0x0 length 0x400 00:27:54.546 Nvme6n1 : 1.03 124.68 7.79 62.34 0.00 308104.66 19903.53 276513.37 00:27:54.546 Job: Nvme7n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:27:54.546 Job: Nvme7n1 ended in about 1.03 seconds with error 00:27:54.546 Verification LBA range: start 0x0 length 0x400 00:27:54.546 Nvme7n1 : 1.03 186.44 11.65 62.15 0.00 227412.76 20486.07 257872.02 00:27:54.546 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.546 Job: Nvme8n1 ended in about 1.00 seconds with error 00:27:54.546 Verification LBA range: start 0x0 length 0x400 00:27:54.546 Nvme8n1 : 1.00 191.42 11.96 63.81 0.00 216361.20 5145.79 274959.93 00:27:54.546 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.546 Job: Nvme9n1 ended in about 1.04 seconds with error 00:27:54.546 Verification LBA range: start 0x0 length 0x400 00:27:54.546 Nvme9n1 : 1.04 185.31 11.58 61.77 0.00 220114.68 21359.88 257872.02 00:27:54.546 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.546 Job: Nvme10n1 ended in about 1.02 seconds with error 00:27:54.546 Verification LBA range: start 0x0 length 0x400 00:27:54.546 Nvme10n1 : 1.02 125.81 7.86 62.91 0.00 281522.88 22913.33 284280.60 00:27:54.546 =================================================================================================================== 00:27:54.546 Total : 1711.38 106.96 630.67 0.00 248460.37 4296.25 288940.94 00:27:54.546 [2024-07-25 05:47:48.144573] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:54.546 [2024-07-25 05:47:48.144668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:54.546 [2024-07-25 05:47:48.144754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194a4b0 (9): Bad file descriptor 00:27:54.546 [2024-07-25 05:47:48.144786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145bf90 (9): Bad file descriptor 00:27:54.546 [2024-07-25 05:47:48.144805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1898ff0 (9): Bad file descriptor 00:27:54.546 [2024-07-25 05:47:48.144824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1944f20 (9): Bad file descriptor 00:27:54.546 [2024-07-25 05:47:48.144842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1383610 (9): Bad file descriptor 00:27:54.546 [2024-07-25 05:47:48.144860] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:54.546 [2024-07-25 05:47:48.144874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:54.546 [2024-07-25 05:47:48.144892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:54.546 [2024-07-25 05:47:48.145086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:54.546 [2024-07-25 05:47:48.145436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.546 [2024-07-25 05:47:48.145474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2bb50 with addr=10.0.0.2, port=4420 00:27:54.546 [2024-07-25 05:47:48.145506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2bb50 is same with the state(5) to be set 00:27:54.546 [2024-07-25 05:47:48.145522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:54.546 [2024-07-25 05:47:48.145535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:54.546 [2024-07-25 05:47:48.145549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:27:54.546 [2024-07-25 05:47:48.145567] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:54.546 [2024-07-25 05:47:48.145580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:54.546 [2024-07-25 05:47:48.145593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:54.546 [2024-07-25 05:47:48.145609] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:54.546 [2024-07-25 05:47:48.145622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:54.546 [2024-07-25 05:47:48.145635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:54.546 [2024-07-25 05:47:48.145650] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:54.546 [2024-07-25 05:47:48.145663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:54.546 [2024-07-25 05:47:48.145676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:54.546 [2024-07-25 05:47:48.145693] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:54.546 [2024-07-25 05:47:48.145706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:54.546 [2024-07-25 05:47:48.145718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:54.546 [2024-07-25 05:47:48.145773] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:54.546 [2024-07-25 05:47:48.145796] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:54.546 [2024-07-25 05:47:48.145813] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:54.546 [2024-07-25 05:47:48.145832] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:54.546 [2024-07-25 05:47:48.145849] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:54.546 [2024-07-25 05:47:48.146257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:54.546 [2024-07-25 05:47:48.146280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:54.547 [2024-07-25 05:47:48.146293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:54.547 [2024-07-25 05:47:48.146304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:54.547 [2024-07-25 05:47:48.146316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:54.547 [2024-07-25 05:47:48.146340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2bb50 (9): Bad file descriptor 00:27:54.547 [2024-07-25 05:47:48.146408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:54.547 [2024-07-25 05:47:48.146433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:54.547 [2024-07-25 05:47:48.146469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:54.547 [2024-07-25 05:47:48.146490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:54.547 [2024-07-25 05:47:48.146504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:54.547 [2024-07-25 05:47:48.146543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:54.547 [2024-07-25 05:47:48.146564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:54.547 [2024-07-25 05:47:48.146591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:54.547 [2024-07-25 05:47:48.146866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.547 [2024-07-25 05:47:48.146894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a20cd0 with addr=10.0.0.2, port=4420 00:27:54.547 [2024-07-25 05:47:48.146910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20cd0 is same with the state(5) to be set 00:27:54.547 [2024-07-25 05:47:48.147032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.547 [2024-07-25 05:47:48.147057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18953c0 with addr=10.0.0.2, port=4420 00:27:54.547 [2024-07-25 05:47:48.147072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18953c0 is same with the state(5) to be set 00:27:54.547 [2024-07-25 05:47:48.147227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.547 [2024-07-25 05:47:48.147261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1454ee0 with addr=10.0.0.2, port=4420 00:27:54.547 [2024-07-25 05:47:48.147278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1454ee0 is same with the state(5) to be set 00:27:54.547 [2024-07-25 05:47:48.147392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.547 [2024-07-25 05:47:48.147417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x188d940 with addr=10.0.0.2, port=4420 00:27:54.547 [2024-07-25 05:47:48.147432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d940 is same with the state(5) to be set 00:27:54.547 [2024-07-25 05:47:48.147450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a20cd0 (9): Bad file descriptor 00:27:54.547 [2024-07-25 
05:47:48.147469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18953c0 (9): Bad file descriptor 00:27:54.547 [2024-07-25 05:47:48.147510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1454ee0 (9): Bad file descriptor 00:27:54.547 [2024-07-25 05:47:48.147533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188d940 (9): Bad file descriptor 00:27:54.547 [2024-07-25 05:47:48.147549] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:54.547 [2024-07-25 05:47:48.147562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:54.547 [2024-07-25 05:47:48.147575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:54.547 [2024-07-25 05:47:48.147590] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:54.547 [2024-07-25 05:47:48.147604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:54.547 [2024-07-25 05:47:48.147616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:54.547 [2024-07-25 05:47:48.147655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:54.547 [2024-07-25 05:47:48.147672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:54.547 [2024-07-25 05:47:48.147683] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:54.547 [2024-07-25 05:47:48.147700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:54.547 [2024-07-25 05:47:48.147713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:54.547 [2024-07-25 05:47:48.147729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:54.547 [2024-07-25 05:47:48.147742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:54.547 [2024-07-25 05:47:48.147754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:54.547 [2024-07-25 05:47:48.147789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:54.547 [2024-07-25 05:47:48.147805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:55.118 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:55.118 05:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1709882 00:27:56.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1709882) - No such process 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:56.050 rmmod nvme_tcp 00:27:56.050 rmmod nvme_fabrics 00:27:56.050 rmmod nvme_keyring 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.050 05:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.584 05:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:58.584 00:27:58.584 real 0m7.479s 00:27:58.584 
user 0m18.048s 00:27:58.584 sys 0m1.555s 00:27:58.584 05:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:58.584 05:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:58.584 ************************************ 00:27:58.584 END TEST nvmf_shutdown_tc3 00:27:58.584 ************************************ 00:27:58.584 05:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:58.584 00:27:58.584 real 0m26.700s 00:27:58.584 user 1m13.367s 00:27:58.584 sys 0m6.394s 00:27:58.584 05:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:58.584 05:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:58.584 ************************************ 00:27:58.584 END TEST nvmf_shutdown 00:27:58.584 ************************************ 00:27:58.584 05:47:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:27:58.584 00:27:58.584 real 16m45.063s 00:27:58.584 user 47m6.090s 00:27:58.584 sys 3m52.292s 00:27:58.584 05:47:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:58.585 05:47:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:58.585 ************************************ 00:27:58.585 END TEST nvmf_target_extra 00:27:58.585 ************************************ 00:27:58.585 05:47:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:58.585 05:47:51 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:58.585 05:47:51 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:58.585 05:47:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:58.585 
************************************ 00:27:58.585 START TEST nvmf_host 00:27:58.585 ************************************ 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:58.585 * Looking for test storage... 00:27:58.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.585 ************************************ 00:27:58.585 START TEST nvmf_multicontroller 00:27:58.585 ************************************ 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:58.585 * Looking for test storage... 
00:27:58.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.585 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:58.586 05:47:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:00.487 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:00.487 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:00.487 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:00.487 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:00.487 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.488 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.488 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.488 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.488 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.488 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.488 05:47:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:28:00.488 00:28:00.488 --- 10.0.0.2 ping statistics --- 00:28:00.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.488 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:00.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:28:00.488 00:28:00.488 --- 10.0.0.1 ping statistics --- 00:28:00.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.488 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1712316 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1712316 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1712316 ']' 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.488 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.488 [2024-07-25 05:47:54.094406] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:28:00.488 [2024-07-25 05:47:54.094479] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.488 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.488 [2024-07-25 05:47:54.162385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:00.747 [2024-07-25 05:47:54.255442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.747 [2024-07-25 05:47:54.255509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:00.747 [2024-07-25 05:47:54.255525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.747 [2024-07-25 05:47:54.255538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.747 [2024-07-25 05:47:54.255559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.747 [2024-07-25 05:47:54.255645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.747 [2024-07-25 05:47:54.255767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.747 [2024-07-25 05:47:54.255770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.747 [2024-07-25 05:47:54.399475] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.747 Malloc0 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.747 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 [2024-07-25 
05:47:54.459284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 [2024-07-25 05:47:54.467110] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 Malloc1 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1712455 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1712455 /var/tmp/bdevperf.sock 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1712455 ']' 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:01.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.005 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.264 NVMe0n1 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.264 1 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.264 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.265 request: 00:28:01.265 { 00:28:01.265 "name": "NVMe0", 00:28:01.265 "trtype": "tcp", 00:28:01.265 "traddr": "10.0.0.2", 00:28:01.265 "adrfam": "ipv4", 00:28:01.265 "trsvcid": "4420", 00:28:01.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:01.265 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:01.265 "hostaddr": "10.0.0.2", 00:28:01.265 "hostsvcid": "60000", 00:28:01.265 "prchk_reftag": false, 00:28:01.265 "prchk_guard": false, 00:28:01.265 "hdgst": false, 00:28:01.265 "ddgst": false, 00:28:01.265 "method": "bdev_nvme_attach_controller", 00:28:01.265 "req_id": 1 00:28:01.265 } 00:28:01.265 Got JSON-RPC error response 00:28:01.265 response: 00:28:01.265 { 00:28:01.265 "code": -114, 00:28:01.265 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:01.265 } 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:01.265 05:47:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.265 request: 00:28:01.265 { 00:28:01.265 "name": "NVMe0", 00:28:01.265 "trtype": "tcp", 00:28:01.265 "traddr": "10.0.0.2", 00:28:01.265 "adrfam": "ipv4", 00:28:01.265 "trsvcid": "4420", 00:28:01.265 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:01.265 "hostaddr": "10.0.0.2", 00:28:01.265 "hostsvcid": "60000", 00:28:01.265 "prchk_reftag": false, 00:28:01.265 "prchk_guard": false, 00:28:01.265 "hdgst": false, 00:28:01.265 "ddgst": false, 00:28:01.265 "method": "bdev_nvme_attach_controller", 00:28:01.265 "req_id": 1 00:28:01.265 } 00:28:01.265 Got JSON-RPC error response 00:28:01.265 response: 00:28:01.265 { 00:28:01.265 "code": -114, 00:28:01.265 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:01.265 } 00:28:01.265 
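The repeated -114 rejections above follow one rule: re-attaching a controller name over a network path it already uses is refused, while a genuinely new path (the later attach to port 4421) is accepted. A minimal sketch of that bookkeeping, assuming a simplified model — this is not SPDK's real implementation, and it ignores the multipath/host-ID options the test also exercises:

```shell
# Sketch: a controller name maps to a set of (addr, port) network paths.
# Re-attaching an already-registered (name, addr, port) tuple is refused
# with the same -114-style error seen in the trace above.
declare -A paths   # key: "name:addr:port" -> subsystem NQN

attach_controller() {
    local name=$1 addr=$2 port=$3 subnqn=$4
    local key="$name:$addr:$port"
    if [[ -n ${paths[$key]} ]]; then
        echo "error -114: A controller named $name already exists with the specified network path"
        return 114
    fi
    paths[$key]=$subnqn
    echo "attached $name path $addr:$port"
}

attach_controller NVMe0 10.0.0.2 4420 nqn.2016-06.io.spdk:cnode1
# Same name and same path: rejected, mirroring the NOT cases above.
attach_controller NVMe0 10.0.0.2 4420 nqn.2016-06.io.spdk:cnode1 || echo "rejected as expected"
# Same name but a new port: accepted as an additional path.
attach_controller NVMe0 10.0.0.2 4421 nqn.2016-06.io.spdk:cnode1
```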
05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:01.265 05:47:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.265 request: 00:28:01.265 { 00:28:01.265 "name": "NVMe0", 00:28:01.265 "trtype": "tcp", 00:28:01.265 "traddr": "10.0.0.2", 00:28:01.265 "adrfam": "ipv4", 00:28:01.265 "trsvcid": "4420", 00:28:01.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:01.265 "hostaddr": "10.0.0.2", 00:28:01.265 "hostsvcid": "60000", 00:28:01.265 "prchk_reftag": false, 00:28:01.265 "prchk_guard": false, 00:28:01.265 "hdgst": false, 00:28:01.265 "ddgst": false, 00:28:01.265 "multipath": "disable", 00:28:01.265 "method": "bdev_nvme_attach_controller", 00:28:01.265 "req_id": 1 00:28:01.265 } 00:28:01.265 Got JSON-RPC error response 00:28:01.265 response: 00:28:01.265 { 00:28:01.265 "code": -114, 00:28:01.265 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:01.265 } 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.265 request: 00:28:01.265 { 00:28:01.265 "name": "NVMe0", 00:28:01.265 "trtype": "tcp", 00:28:01.265 "traddr": "10.0.0.2", 00:28:01.265 "adrfam": "ipv4", 00:28:01.265 "trsvcid": "4420", 00:28:01.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:01.265 "hostaddr": "10.0.0.2", 00:28:01.265 "hostsvcid": "60000", 00:28:01.265 "prchk_reftag": false, 00:28:01.265 "prchk_guard": false, 00:28:01.265 "hdgst": false, 00:28:01.265 "ddgst": false, 00:28:01.265 "multipath": "failover", 00:28:01.265 "method": "bdev_nvme_attach_controller", 00:28:01.265 "req_id": 1 00:28:01.265 } 00:28:01.265 Got JSON-RPC error response 00:28:01.265 response: 00:28:01.265 { 00:28:01.265 "code": -114, 00:28:01.265 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:01.265 
} 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.265 05:47:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.523 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:01.523 05:47:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.523 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.523 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.781 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.781 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:01.781 05:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:02.715 0 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1712455 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' 
-z 1712455 ']' 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1712455 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1712455 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1712455' 00:28:02.715 killing process with pid 1712455 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1712455 00:28:02.715 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1712455 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:02.973 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:02.973 [2024-07-25 05:47:54.568726] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:28:02.973 [2024-07-25 05:47:54.568820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712455 ] 00:28:02.973 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.973 [2024-07-25 05:47:54.630960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.973 [2024-07-25 05:47:54.717217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.973 [2024-07-25 05:47:55.217587] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 9acb5846-e51d-4cac-9bb8-3dab3a4a878b already exists 00:28:02.973 [2024-07-25 05:47:55.217622] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:9acb5846-e51d-4cac-9bb8-3dab3a4a878b alias for bdev NVMe1n1 00:28:02.973 [2024-07-25 05:47:55.217652] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:02.973 Running I/O for 1 seconds... 
00:28:02.973 00:28:02.973 Latency(us) 00:28:02.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.973 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:02.973 NVMe0n1 : 1.00 19375.25 75.68 0.00 0.00 6595.21 2572.89 11990.66 00:28:02.973 =================================================================================================================== 00:28:02.973 Total : 19375.25 75.68 0.00 0.00 6595.21 2572.89 11990.66 00:28:02.973 Received shutdown signal, test time was about 1.000000 seconds 00:28:02.973 00:28:02.973 Latency(us) 00:28:02.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.973 =================================================================================================================== 00:28:02.973 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.973 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:02.973 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:02.974 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:02.974 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:02.974 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:02.974 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:02.974 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:02.974 
rmmod nvme_tcp 00:28:02.974 rmmod nvme_fabrics 00:28:03.232 rmmod nvme_keyring 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1712316 ']' 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1712316 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1712316 ']' 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1712316 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1712316 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1712316' 00:28:03.232 killing process with pid 1712316 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1712316 00:28:03.232 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1712316 00:28:03.490 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:03.490 05:47:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:03.490 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:03.490 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.491 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:03.491 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.491 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.491 05:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.390 05:47:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:05.390 00:28:05.390 real 0m7.111s 00:28:05.390 user 0m11.037s 00:28:05.390 sys 0m2.140s 00:28:05.390 05:47:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:05.390 05:47:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.390 ************************************ 00:28:05.390 END TEST nvmf_multicontroller 00:28:05.390 ************************************ 00:28:05.390 05:47:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:05.390 05:47:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:05.390 05:47:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:05.390 05:47:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.390 ************************************ 00:28:05.390 START TEST nvmf_aer 00:28:05.390 ************************************ 00:28:05.390 05:47:59 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:05.648 * Looking for test storage... 00:28:05.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.648 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:05.649 05:47:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:07.550 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:07.550 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:07.550 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.550 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:07.551 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.551 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:07.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:28:07.809 00:28:07.809 --- 10.0.0.2 ping statistics --- 00:28:07.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.809 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:07.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:28:07.809 00:28:07.809 --- 10.0.0.1 ping statistics --- 00:28:07.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.809 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1714745 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1714745 00:28:07.809 05:48:01 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1714745 ']' 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:07.809 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:07.809 [2024-07-25 05:48:01.401928] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:28:07.809 [2024-07-25 05:48:01.402024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.809 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.809 [2024-07-25 05:48:01.473031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:08.068 [2024-07-25 05:48:01.568589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.068 [2024-07-25 05:48:01.568658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.068 [2024-07-25 05:48:01.568674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.068 [2024-07-25 05:48:01.568688] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:08.068 [2024-07-25 05:48:01.568704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.068 [2024-07-25 05:48:01.568783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.068 [2024-07-25 05:48:01.568861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.068 [2024-07-25 05:48:01.568886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.068 [2024-07-25 05:48:01.568888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.068 [2024-07-25 05:48:01.728388] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.068 05:48:01 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.068 Malloc0 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.068 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.327 [2024-07-25 05:48:01.779632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.327 [ 
00:28:08.327 {
00:28:08.327 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:08.327 "subtype": "Discovery",
00:28:08.327 "listen_addresses": [],
00:28:08.327 "allow_any_host": true,
00:28:08.327 "hosts": []
00:28:08.327 },
00:28:08.327 {
00:28:08.327 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:08.327 "subtype": "NVMe",
00:28:08.327 "listen_addresses": [
00:28:08.327 {
00:28:08.327 "trtype": "TCP",
00:28:08.327 "adrfam": "IPv4",
00:28:08.327 "traddr": "10.0.0.2",
00:28:08.327 "trsvcid": "4420"
00:28:08.327 }
00:28:08.327 ],
00:28:08.327 "allow_any_host": true,
00:28:08.327 "hosts": [],
00:28:08.327 "serial_number": "SPDK00000000000001",
00:28:08.327 "model_number": "SPDK bdev Controller",
00:28:08.327 "max_namespaces": 2,
00:28:08.327 "min_cntlid": 1,
00:28:08.327 "max_cntlid": 65519,
00:28:08.327 "namespaces": [
00:28:08.327 {
00:28:08.327 "nsid": 1,
00:28:08.327 "bdev_name": "Malloc0",
00:28:08.327 "name": "Malloc0",
00:28:08.327 "nguid": "3AD9A23B8A5C42EC93D6EF413FE1E619",
00:28:08.327 "uuid": "3ad9a23b-8a5c-42ec-93d6-ef413fe1e619"
00:28:08.327 }
00:28:08.327 ]
00:28:08.327 }
00:28:08.327 ]
00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1714802
00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0
00:28:08.327 05:48:01
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:08.327 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:08.327 05:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:08.327 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:08.327 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:28:08.327 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:28:08.327 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:08.585 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:08.585 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:08.585 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:08.585 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:08.585 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.586 Malloc1 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.586 Asynchronous Event Request test 00:28:08.586 Attaching to 10.0.0.2 00:28:08.586 Attached to 10.0.0.2 00:28:08.586 Registering asynchronous event callbacks... 00:28:08.586 Starting namespace attribute notice tests for all controllers... 00:28:08.586 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:08.586 aer_cb - Changed Namespace 00:28:08.586 Cleaning up... 
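The `waitforfile /tmp/aer_touch_file` loop traced above polls in 0.1 s steps with a 200-iteration cap (both visible in the xtrace). A standalone sketch of that helper follows; the real one lives in SPDK's `autotest_common.sh`, and the background `touch` here stands in for the aer binary's `-t` touch file:

```shell
#!/usr/bin/env bash
# Poll until a file exists, giving up after 200 * 0.1 s = 20 s,
# mirroring the i-counter / sleep 0.1 loop in the trace above.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$file" ]   # succeed only if the file actually appeared
}

touch_file=$(mktemp -u)
( sleep 0.3; touch "$touch_file" ) &   # stand-in for the aer binary touching the file
waitforfile "$touch_file" && echo "file appeared"
rm -f "$touch_file"
```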
00:28:08.586 [
00:28:08.586 {
00:28:08.586 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:08.586 "subtype": "Discovery",
00:28:08.586 "listen_addresses": [],
00:28:08.586 "allow_any_host": true,
00:28:08.586 "hosts": []
00:28:08.586 },
00:28:08.586 {
00:28:08.586 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:08.586 "subtype": "NVMe",
00:28:08.586 "listen_addresses": [
00:28:08.586 {
00:28:08.586 "trtype": "TCP",
00:28:08.586 "adrfam": "IPv4",
00:28:08.586 "traddr": "10.0.0.2",
00:28:08.586 "trsvcid": "4420"
00:28:08.586 }
00:28:08.586 ],
00:28:08.586 "allow_any_host": true,
00:28:08.586 "hosts": [],
00:28:08.586 "serial_number": "SPDK00000000000001",
00:28:08.586 "model_number": "SPDK bdev Controller",
00:28:08.586 "max_namespaces": 2,
00:28:08.586 "min_cntlid": 1,
00:28:08.586 "max_cntlid": 65519,
00:28:08.586 "namespaces": [
00:28:08.586 {
00:28:08.586 "nsid": 1,
00:28:08.586 "bdev_name": "Malloc0",
00:28:08.586 "name": "Malloc0",
00:28:08.586 "nguid": "3AD9A23B8A5C42EC93D6EF413FE1E619",
00:28:08.586 "uuid": "3ad9a23b-8a5c-42ec-93d6-ef413fe1e619"
00:28:08.586 },
00:28:08.586 {
00:28:08.586 "nsid": 2,
00:28:08.586 "bdev_name": "Malloc1",
00:28:08.586 "name": "Malloc1",
00:28:08.586 "nguid": "E3D50BDE72764D749684DE0C7F22ABE4",
00:28:08.586 "uuid": "e3d50bde-7276-4d74-9684-de0c7f22abe4"
00:28:08.586 }
00:28:08.586 ]
00:28:08.586 }
00:28:08.586 ]
00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1714802
00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:08.586 05:48:02
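Stripped of the per-line elapsed-time prefixes, the `nvmf_get_subsystems` listing above is ordinary JSON. A small shell sketch that checks the fields the test relies on; the payload below is a subset of the listing with values copied from the log, and python3 is used only as a convenient JSON validator:

```shell
#!/usr/bin/env bash
# Validate the shape of the nvmf_get_subsystems output shown above:
# the cnode1 subsystem was created with -m 2, and after adding Malloc1
# both namespace slots are in use.
ns_count=$(python3 - <<'EOF'
import json
listing = json.loads('''
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [], "allow_any_host": true, "hosts": []},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "max_namespaces": 2,
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc0"},
                  {"nsid": 2, "bdev_name": "Malloc1"}]}
]
''')
cnode1 = next(s for s in listing if s["nqn"].endswith("cnode1"))
assert len(cnode1["namespaces"]) == cnode1["max_namespaces"]
print(len(cnode1["namespaces"]))
EOF
)
echo "namespaces in cnode1: $ns_count"   # -> namespaces in cnode1: 2
```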
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:08.586 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:08.586 rmmod nvme_tcp 00:28:08.586 rmmod nvme_fabrics 00:28:08.846 rmmod nvme_keyring 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 
1714745 ']' 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1714745 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1714745 ']' 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1714745 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1714745 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1714745' 00:28:08.846 killing process with pid 1714745 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1714745 00:28:08.846 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1714745 00:28:09.104 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:09.104 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:09.104 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:09.104 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:09.104 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:09.104 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.104 05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.104 
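The `killprocess 1714745` sequence above follows a fixed pattern: probe liveness with `kill -0`, read the process name with `ps --no-headers -o comm=`, refuse to kill a bare `sudo` wrapper, then `kill` and `wait`. A simplified runnable sketch; the real helper in `autotest_common.sh` also checks `uname` and handles sudo-wrapped targets:

```shell
#!/usr/bin/env bash
# Simplified stand-in for autotest_common.sh's killprocess.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")       # same probe as in the trace
    if [ "$name" = sudo ]; then
        return 1                                  # never kill a bare sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it so kill -0 fails afterwards
}

sleep 30 &
demo_pid=$!
killprocess "$demo_pid"   # prints "killing process with pid <pid>"
```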
05:48:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.005 05:48:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.005 00:28:11.005 real 0m5.528s 00:28:11.005 user 0m4.655s 00:28:11.005 sys 0m1.936s 00:28:11.005 05:48:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:11.005 05:48:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.005 ************************************ 00:28:11.005 END TEST nvmf_aer 00:28:11.005 ************************************ 00:28:11.005 05:48:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:11.005 05:48:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:11.005 05:48:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:11.005 05:48:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.005 ************************************ 00:28:11.005 START TEST nvmf_async_init 00:28:11.005 ************************************ 00:28:11.005 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:11.005 * Looking for test storage... 
00:28:11.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.006 05:48:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.006 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.264 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:11.264 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:11.264 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:11.264 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:11.264 05:48:04 
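The long repeated `/opt/...` runs in the PATH echoed above come from `paths/export.sh` prepending its directories every time it is re-sourced. If one ever wanted to collapse such duplicates, a sketch (`dedup_path` is a hypothetical helper, not part of SPDK):

```shell
#!/usr/bin/env bash
# Collapse duplicate PATH entries while keeping first-seen order,
# e.g. for the repeated /opt/... prefixes visible above.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

PATH_DEMO=/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin
dedup_path "$PATH_DEMO"   # -> /opt/go/bin:/usr/bin:/bin
echo
```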
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:11.264 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:11.264 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6b771768087c407f935af70a2a9e03e7 00:28:11.264 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:11.264 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.265 05:48:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.172 
05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:13.172 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.172 05:48:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:13.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.172 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:13.173 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:13.173 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:13.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:13.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms
00:28:13.173
00:28:13.173 --- 10.0.0.2 ping statistics ---
00:28:13.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:13.173 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:13.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:13.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms
00:28:13.173
00:28:13.173 --- 10.0.0.1 ping statistics ---
00:28:13.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:13.173 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:13.173 05:48:06
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1716864 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1716864 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1716864 ']' 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.173 05:48:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.432 [2024-07-25 05:48:06.900211] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:28:13.432 [2024-07-25 05:48:06.900308] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.432 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.432 [2024-07-25 05:48:06.977648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.432 [2024-07-25 05:48:07.066788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.432 [2024-07-25 05:48:07.066851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.432 [2024-07-25 05:48:07.066877] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.432 [2024-07-25 05:48:07.066891] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.432 [2024-07-25 05:48:07.066904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:13.432 [2024-07-25 05:48:07.066944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.690 [2024-07-25 05:48:07.208213] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.690 null0 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6b771768087c407f935af70a2a9e03e7 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.690 [2024-07-25 05:48:07.248479] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:13.690 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.691 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.949 nvme0n1 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.949 [ 00:28:13.949 { 00:28:13.949 "name": "nvme0n1", 00:28:13.949 "aliases": [ 00:28:13.949 "6b771768-087c-407f-935a-f70a2a9e03e7" 00:28:13.949 ], 00:28:13.949 "product_name": "NVMe disk", 00:28:13.949 "block_size": 512, 00:28:13.949 "num_blocks": 2097152, 00:28:13.949 "uuid": "6b771768-087c-407f-935a-f70a2a9e03e7", 00:28:13.949 "assigned_rate_limits": { 00:28:13.949 "rw_ios_per_sec": 0, 00:28:13.949 "rw_mbytes_per_sec": 0, 00:28:13.949 "r_mbytes_per_sec": 0, 00:28:13.949 "w_mbytes_per_sec": 0 00:28:13.949 }, 00:28:13.949 "claimed": false, 00:28:13.949 "zoned": false, 00:28:13.949 "supported_io_types": { 00:28:13.949 "read": true, 00:28:13.949 "write": true, 00:28:13.949 "unmap": false, 00:28:13.949 "flush": true, 00:28:13.949 "reset": true, 00:28:13.949 "nvme_admin": true, 00:28:13.949 "nvme_io": true, 00:28:13.949 "nvme_io_md": false, 00:28:13.949 "write_zeroes": true, 00:28:13.949 "zcopy": false, 00:28:13.949 "get_zone_info": false, 00:28:13.949 "zone_management": false, 00:28:13.949 "zone_append": false, 00:28:13.949 "compare": true, 00:28:13.949 "compare_and_write": true, 00:28:13.949 "abort": true, 00:28:13.949 "seek_hole": false, 00:28:13.949 "seek_data": false, 00:28:13.949 "copy": true, 00:28:13.949 "nvme_iov_md": false 
00:28:13.949 }, 00:28:13.949 "memory_domains": [ 00:28:13.949 { 00:28:13.949 "dma_device_id": "system", 00:28:13.949 "dma_device_type": 1 00:28:13.949 } 00:28:13.949 ], 00:28:13.949 "driver_specific": { 00:28:13.949 "nvme": [ 00:28:13.949 { 00:28:13.949 "trid": { 00:28:13.949 "trtype": "TCP", 00:28:13.949 "adrfam": "IPv4", 00:28:13.949 "traddr": "10.0.0.2", 00:28:13.949 "trsvcid": "4420", 00:28:13.949 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:13.949 }, 00:28:13.949 "ctrlr_data": { 00:28:13.949 "cntlid": 1, 00:28:13.949 "vendor_id": "0x8086", 00:28:13.949 "model_number": "SPDK bdev Controller", 00:28:13.949 "serial_number": "00000000000000000000", 00:28:13.949 "firmware_revision": "24.09", 00:28:13.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.949 "oacs": { 00:28:13.949 "security": 0, 00:28:13.949 "format": 0, 00:28:13.949 "firmware": 0, 00:28:13.949 "ns_manage": 0 00:28:13.949 }, 00:28:13.949 "multi_ctrlr": true, 00:28:13.949 "ana_reporting": false 00:28:13.949 }, 00:28:13.949 "vs": { 00:28:13.949 "nvme_version": "1.3" 00:28:13.949 }, 00:28:13.949 "ns_data": { 00:28:13.949 "id": 1, 00:28:13.949 "can_share": true 00:28:13.949 } 00:28:13.949 } 00:28:13.949 ], 00:28:13.949 "mp_policy": "active_passive" 00:28:13.949 } 00:28:13.949 } 00:28:13.949 ] 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.949 [2024-07-25 05:48:07.501849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:13.949 [2024-07-25 05:48:07.501941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f70d20 
(9): Bad file descriptor 00:28:13.949 [2024-07-25 05:48:07.634409] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.949 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.949 [ 00:28:13.949 { 00:28:13.949 "name": "nvme0n1", 00:28:13.949 "aliases": [ 00:28:13.949 "6b771768-087c-407f-935a-f70a2a9e03e7" 00:28:13.949 ], 00:28:13.949 "product_name": "NVMe disk", 00:28:13.949 "block_size": 512, 00:28:13.949 "num_blocks": 2097152, 00:28:13.949 "uuid": "6b771768-087c-407f-935a-f70a2a9e03e7", 00:28:13.949 "assigned_rate_limits": { 00:28:13.949 "rw_ios_per_sec": 0, 00:28:13.949 "rw_mbytes_per_sec": 0, 00:28:13.949 "r_mbytes_per_sec": 0, 00:28:13.949 "w_mbytes_per_sec": 0 00:28:13.949 }, 00:28:13.949 "claimed": false, 00:28:13.949 "zoned": false, 00:28:13.949 "supported_io_types": { 00:28:13.949 "read": true, 00:28:13.949 "write": true, 00:28:13.949 "unmap": false, 00:28:13.949 "flush": true, 00:28:13.949 "reset": true, 00:28:13.949 "nvme_admin": true, 00:28:13.949 "nvme_io": true, 00:28:13.949 "nvme_io_md": false, 00:28:13.949 "write_zeroes": true, 00:28:13.949 "zcopy": false, 00:28:13.949 "get_zone_info": false, 00:28:13.949 "zone_management": false, 00:28:13.949 "zone_append": false, 00:28:13.949 "compare": true, 00:28:13.949 "compare_and_write": true, 00:28:13.949 "abort": true, 00:28:13.949 "seek_hole": false, 00:28:13.949 "seek_data": false, 00:28:13.949 "copy": true, 00:28:13.949 "nvme_iov_md": false 00:28:13.949 }, 00:28:13.949 "memory_domains": [ 00:28:13.949 { 00:28:13.949 "dma_device_id": "system", 00:28:13.949 "dma_device_type": 1 
00:28:13.949 } 00:28:14.208 ], 00:28:14.208 "driver_specific": { 00:28:14.208 "nvme": [ 00:28:14.208 { 00:28:14.208 "trid": { 00:28:14.208 "trtype": "TCP", 00:28:14.208 "adrfam": "IPv4", 00:28:14.208 "traddr": "10.0.0.2", 00:28:14.208 "trsvcid": "4420", 00:28:14.208 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:14.208 }, 00:28:14.208 "ctrlr_data": { 00:28:14.208 "cntlid": 2, 00:28:14.208 "vendor_id": "0x8086", 00:28:14.208 "model_number": "SPDK bdev Controller", 00:28:14.208 "serial_number": "00000000000000000000", 00:28:14.208 "firmware_revision": "24.09", 00:28:14.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.208 "oacs": { 00:28:14.208 "security": 0, 00:28:14.208 "format": 0, 00:28:14.208 "firmware": 0, 00:28:14.208 "ns_manage": 0 00:28:14.208 }, 00:28:14.208 "multi_ctrlr": true, 00:28:14.208 "ana_reporting": false 00:28:14.208 }, 00:28:14.208 "vs": { 00:28:14.208 "nvme_version": "1.3" 00:28:14.208 }, 00:28:14.208 "ns_data": { 00:28:14.208 "id": 1, 00:28:14.208 "can_share": true 00:28:14.208 } 00:28:14.208 } 00:28:14.208 ], 00:28:14.208 "mp_policy": "active_passive" 00:28:14.208 } 00:28:14.208 } 00:28:14.208 ] 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.F5PVnZZw0V 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.F5PVnZZw0V 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.208 [2024-07-25 05:48:07.686497] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:14.208 [2024-07-25 05:48:07.686654] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F5PVnZZw0V 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.208 [2024-07-25 05:48:07.694507] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F5PVnZZw0V 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.208 [2024-07-25 05:48:07.702558] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:14.208 [2024-07-25 05:48:07.702629] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:14.208 nvme0n1 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.208 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.208 [ 00:28:14.208 { 00:28:14.208 "name": "nvme0n1", 00:28:14.208 "aliases": [ 00:28:14.208 "6b771768-087c-407f-935a-f70a2a9e03e7" 00:28:14.208 ], 00:28:14.208 "product_name": "NVMe disk", 00:28:14.208 "block_size": 512, 00:28:14.208 "num_blocks": 2097152, 00:28:14.208 "uuid": "6b771768-087c-407f-935a-f70a2a9e03e7", 00:28:14.208 "assigned_rate_limits": { 00:28:14.208 "rw_ios_per_sec": 0, 00:28:14.208 "rw_mbytes_per_sec": 0, 00:28:14.208 "r_mbytes_per_sec": 0, 00:28:14.208 "w_mbytes_per_sec": 0 00:28:14.208 }, 00:28:14.208 "claimed": false, 00:28:14.208 "zoned": false, 00:28:14.208 "supported_io_types": { 
00:28:14.208 "read": true, 00:28:14.208 "write": true, 00:28:14.208 "unmap": false, 00:28:14.208 "flush": true, 00:28:14.208 "reset": true, 00:28:14.208 "nvme_admin": true, 00:28:14.208 "nvme_io": true, 00:28:14.208 "nvme_io_md": false, 00:28:14.208 "write_zeroes": true, 00:28:14.208 "zcopy": false, 00:28:14.208 "get_zone_info": false, 00:28:14.208 "zone_management": false, 00:28:14.208 "zone_append": false, 00:28:14.208 "compare": true, 00:28:14.208 "compare_and_write": true, 00:28:14.208 "abort": true, 00:28:14.208 "seek_hole": false, 00:28:14.208 "seek_data": false, 00:28:14.208 "copy": true, 00:28:14.208 "nvme_iov_md": false 00:28:14.208 }, 00:28:14.208 "memory_domains": [ 00:28:14.208 { 00:28:14.208 "dma_device_id": "system", 00:28:14.208 "dma_device_type": 1 00:28:14.208 } 00:28:14.208 ], 00:28:14.208 "driver_specific": { 00:28:14.208 "nvme": [ 00:28:14.208 { 00:28:14.208 "trid": { 00:28:14.208 "trtype": "TCP", 00:28:14.208 "adrfam": "IPv4", 00:28:14.208 "traddr": "10.0.0.2", 00:28:14.208 "trsvcid": "4421", 00:28:14.208 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:14.208 }, 00:28:14.208 "ctrlr_data": { 00:28:14.208 "cntlid": 3, 00:28:14.208 "vendor_id": "0x8086", 00:28:14.208 "model_number": "SPDK bdev Controller", 00:28:14.208 "serial_number": "00000000000000000000", 00:28:14.208 "firmware_revision": "24.09", 00:28:14.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.208 "oacs": { 00:28:14.208 "security": 0, 00:28:14.208 "format": 0, 00:28:14.208 "firmware": 0, 00:28:14.208 "ns_manage": 0 00:28:14.208 }, 00:28:14.208 "multi_ctrlr": true, 00:28:14.208 "ana_reporting": false 00:28:14.208 }, 00:28:14.208 "vs": { 00:28:14.208 "nvme_version": "1.3" 00:28:14.208 }, 00:28:14.208 "ns_data": { 00:28:14.208 "id": 1, 00:28:14.208 "can_share": true 00:28:14.208 } 00:28:14.208 } 00:28:14.208 ], 00:28:14.208 "mp_policy": "active_passive" 00:28:14.209 } 00:28:14.209 } 00:28:14.209 ] 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.F5PVnZZw0V 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:14.209 rmmod nvme_tcp 00:28:14.209 rmmod nvme_fabrics 00:28:14.209 rmmod nvme_keyring 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1716864 ']' 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
1716864 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1716864 ']' 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1716864 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1716864 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1716864' 00:28:14.209 killing process with pid 1716864 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1716864 00:28:14.209 [2024-07-25 05:48:07.903790] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:14.209 [2024-07-25 05:48:07.903831] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:14.209 05:48:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1716864 00:28:14.467 05:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:14.467 05:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:14.467 05:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:14.467 05:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:28:14.467 05:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:14.467 05:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.467 05:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.467 05:48:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:16.996 00:28:16.996 real 0m5.504s 00:28:16.996 user 0m2.127s 00:28:16.996 sys 0m1.747s 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.996 ************************************ 00:28:16.996 END TEST nvmf_async_init 00:28:16.996 ************************************ 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.996 ************************************ 00:28:16.996 START TEST dma 00:28:16.996 ************************************ 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:16.996 * Looking for test storage... 
00:28:16.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.996 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.997 05:48:10 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:28:16.997 00:28:16.997 real 0m0.075s 00:28:16.997 user 0m0.038s 00:28:16.997 sys 0m0.043s 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:16.997 ************************************ 00:28:16.997 END TEST dma 00:28:16.997 ************************************ 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.997 ************************************ 00:28:16.997 START TEST nvmf_identify 00:28:16.997 ************************************ 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:16.997 * Looking for test storage... 
00:28:16.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:16.997 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:16.998 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.998 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.998 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.998 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:16.998 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:16.998 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:16.998 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.894 05:48:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:18.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:18.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:18.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:18.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:18.894 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:18.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:18.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:28:18.895 00:28:18.895 --- 10.0.0.2 ping statistics --- 00:28:18.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.895 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:18.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:28:18.895 00:28:18.895 --- 10.0.0.1 ping statistics --- 00:28:18.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.895 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1719378 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1719378 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1719378 ']' 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:18.895 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:18.895 [2024-07-25 05:48:12.437028] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:28:18.895 [2024-07-25 05:48:12.437120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.895 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.895 [2024-07-25 05:48:12.508477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.153 [2024-07-25 05:48:12.601093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.153 [2024-07-25 05:48:12.601147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.153 [2024-07-25 05:48:12.601163] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.153 [2024-07-25 05:48:12.601176] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.153 [2024-07-25 05:48:12.601187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:19.153 [2024-07-25 05:48:12.601267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.153 [2024-07-25 05:48:12.601298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.153 [2024-07-25 05:48:12.601413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.153 [2024-07-25 05:48:12.601415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.153 [2024-07-25 05:48:12.718302] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.153 Malloc0 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.153 05:48:12 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.153 [2024-07-25 05:48:12.789253] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.153 05:48:12 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.153 [ 00:28:19.153 { 00:28:19.153 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:19.153 "subtype": "Discovery", 00:28:19.153 "listen_addresses": [ 00:28:19.153 { 00:28:19.153 "trtype": "TCP", 00:28:19.153 "adrfam": "IPv4", 00:28:19.153 "traddr": "10.0.0.2", 00:28:19.153 "trsvcid": "4420" 00:28:19.153 } 00:28:19.153 ], 00:28:19.153 "allow_any_host": true, 00:28:19.153 "hosts": [] 00:28:19.153 }, 00:28:19.153 { 00:28:19.153 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.153 "subtype": "NVMe", 00:28:19.153 "listen_addresses": [ 00:28:19.153 { 00:28:19.153 "trtype": "TCP", 00:28:19.153 "adrfam": "IPv4", 00:28:19.153 "traddr": "10.0.0.2", 00:28:19.153 "trsvcid": "4420" 00:28:19.153 } 00:28:19.153 ], 00:28:19.153 "allow_any_host": true, 00:28:19.153 "hosts": [], 00:28:19.153 "serial_number": "SPDK00000000000001", 00:28:19.153 "model_number": "SPDK bdev Controller", 00:28:19.153 "max_namespaces": 32, 00:28:19.153 "min_cntlid": 1, 00:28:19.153 "max_cntlid": 65519, 00:28:19.153 "namespaces": [ 00:28:19.153 { 00:28:19.153 "nsid": 1, 00:28:19.153 "bdev_name": "Malloc0", 00:28:19.153 "name": "Malloc0", 00:28:19.153 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:19.153 "eui64": "ABCDEF0123456789", 00:28:19.153 "uuid": "4dea58e9-9078-49d3-b4e7-47ece7384a88" 00:28:19.153 } 00:28:19.153 ] 00:28:19.153 } 00:28:19.153 ] 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.153 05:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:19.153 [2024-07-25 05:48:12.826567] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:28:19.153 [2024-07-25 05:48:12.826605] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719517 ] 00:28:19.153 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.415 [2024-07-25 05:48:12.857591] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:19.415 [2024-07-25 05:48:12.857654] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:19.415 [2024-07-25 05:48:12.857664] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:19.415 [2024-07-25 05:48:12.857679] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:19.415 [2024-07-25 05:48:12.857693] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:19.415 [2024-07-25 05:48:12.861292] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:19.415 [2024-07-25 05:48:12.861341] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1868ae0 0 00:28:19.415 [2024-07-25 05:48:12.868250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:19.415 [2024-07-25 05:48:12.868278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:19.415 [2024-07-25 05:48:12.868289] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
00:28:19.415 [2024-07-25 05:48:12.868296] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:19.415 [2024-07-25 05:48:12.868363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.415 [2024-07-25 05:48:12.868376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.415 [2024-07-25 05:48:12.868384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ae0) 00:28:19.415 [2024-07-25 05:48:12.868404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:19.415 [2024-07-25 05:48:12.868431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf240, cid 0, qid 0 00:28:19.415 [2024-07-25 05:48:12.875257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.415 [2024-07-25 05:48:12.875276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.415 [2024-07-25 05:48:12.875284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.415 [2024-07-25 05:48:12.875292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf240) on tqpair=0x1868ae0 00:28:19.415 [2024-07-25 05:48:12.875308] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:19.415 [2024-07-25 05:48:12.875320] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:19.415 [2024-07-25 05:48:12.875330] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:19.415 [2024-07-25 05:48:12.875355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.415 [2024-07-25 05:48:12.875365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.415 [2024-07-25 05:48:12.875371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x1868ae0) 00:28:19.415 [2024-07-25 05:48:12.875383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.415 [2024-07-25 05:48:12.875412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf240, cid 0, qid 0 00:28:19.415 [2024-07-25 05:48:12.875578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.416 [2024-07-25 05:48:12.875594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.416 [2024-07-25 05:48:12.875602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.875610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf240) on tqpair=0x1868ae0 00:28:19.416 [2024-07-25 05:48:12.875624] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:19.416 [2024-07-25 05:48:12.875639] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:19.416 [2024-07-25 05:48:12.875651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.875659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.875665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ae0) 00:28:19.416 [2024-07-25 05:48:12.875676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.416 [2024-07-25 05:48:12.875698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf240, cid 0, qid 0 00:28:19.416 [2024-07-25 05:48:12.875811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.416 [2024-07-25 05:48:12.875827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:28:19.416 [2024-07-25 05:48:12.875834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.875841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf240) on tqpair=0x1868ae0 00:28:19.416 [2024-07-25 05:48:12.875850] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:19.416 [2024-07-25 05:48:12.875865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:19.416 [2024-07-25 05:48:12.875877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.875885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.875892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ae0) 00:28:19.416 [2024-07-25 05:48:12.875902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.416 [2024-07-25 05:48:12.875923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf240, cid 0, qid 0 00:28:19.416 [2024-07-25 05:48:12.876037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.416 [2024-07-25 05:48:12.876053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.416 [2024-07-25 05:48:12.876060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf240) on tqpair=0x1868ae0 00:28:19.416 [2024-07-25 05:48:12.876077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:19.416 [2024-07-25 05:48:12.876094] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ae0) 00:28:19.416 [2024-07-25 05:48:12.876121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.416 [2024-07-25 05:48:12.876142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf240, cid 0, qid 0 00:28:19.416 [2024-07-25 05:48:12.876261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.416 [2024-07-25 05:48:12.876281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.416 [2024-07-25 05:48:12.876290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf240) on tqpair=0x1868ae0 00:28:19.416 [2024-07-25 05:48:12.876306] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:19.416 [2024-07-25 05:48:12.876315] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:19.416 [2024-07-25 05:48:12.876328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:19.416 [2024-07-25 05:48:12.876439] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:19.416 [2024-07-25 05:48:12.876447] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:28:19.416 [2024-07-25 05:48:12.876462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ae0) 00:28:19.416 [2024-07-25 05:48:12.876487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.416 [2024-07-25 05:48:12.876509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf240, cid 0, qid 0 00:28:19.416 [2024-07-25 05:48:12.876655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.416 [2024-07-25 05:48:12.876671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.416 [2024-07-25 05:48:12.876679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf240) on tqpair=0x1868ae0 00:28:19.416 [2024-07-25 05:48:12.876695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:19.416 [2024-07-25 05:48:12.876712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ae0) 00:28:19.416 [2024-07-25 05:48:12.876738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.416 [2024-07-25 05:48:12.876759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf240, cid 0, qid 0 00:28:19.416 [2024-07-25 
05:48:12.876877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.416 [2024-07-25 05:48:12.876889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.416 [2024-07-25 05:48:12.876897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf240) on tqpair=0x1868ae0 00:28:19.416 [2024-07-25 05:48:12.876912] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:19.416 [2024-07-25 05:48:12.876920] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:19.416 [2024-07-25 05:48:12.876933] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:19.416 [2024-07-25 05:48:12.876952] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:19.416 [2024-07-25 05:48:12.876973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.876981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ae0) 00:28:19.416 [2024-07-25 05:48:12.876992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.416 [2024-07-25 05:48:12.877013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf240, cid 0, qid 0 00:28:19.416 [2024-07-25 05:48:12.877157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.416 [2024-07-25 05:48:12.877170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:28:19.416 [2024-07-25 05:48:12.877178] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.877185] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1868ae0): datao=0, datal=4096, cccid=0 00:28:19.416 [2024-07-25 05:48:12.877193] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bf240) on tqpair(0x1868ae0): expected_datao=0, payload_size=4096 00:28:19.416 [2024-07-25 05:48:12.877201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.877218] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.877238] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.917369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.416 [2024-07-25 05:48:12.917388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.416 [2024-07-25 05:48:12.917397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.917404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf240) on tqpair=0x1868ae0 00:28:19.416 [2024-07-25 05:48:12.917417] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:19.416 [2024-07-25 05:48:12.917426] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:19.416 [2024-07-25 05:48:12.917434] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:19.416 [2024-07-25 05:48:12.917443] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:19.416 [2024-07-25 05:48:12.917451] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:28:19.416 [2024-07-25 05:48:12.917459] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:19.416 [2024-07-25 05:48:12.917474] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:19.416 [2024-07-25 05:48:12.917493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.917502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.416 [2024-07-25 05:48:12.917509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ae0) 00:28:19.416 [2024-07-25 05:48:12.917521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:19.416 [2024-07-25 05:48:12.917545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf240, cid 0, qid 0 00:28:19.416 [2024-07-25 05:48:12.917668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.416 [2024-07-25 05:48:12.917681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.416 [2024-07-25 05:48:12.917688] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.917696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf240) on tqpair=0x1868ae0 00:28:19.417 [2024-07-25 05:48:12.917708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.917716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.917726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1868ae0) 00:28:19.417 [2024-07-25 05:48:12.917737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.417 [2024-07-25 05:48:12.917748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.917755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.917761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1868ae0) 00:28:19.417 [2024-07-25 05:48:12.917770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.417 [2024-07-25 05:48:12.917780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.917786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.917793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1868ae0) 00:28:19.417 [2024-07-25 05:48:12.917801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.417 [2024-07-25 05:48:12.917811] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.917818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.917824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.417 [2024-07-25 05:48:12.917833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.417 [2024-07-25 05:48:12.917842] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:19.417 [2024-07-25 05:48:12.917862] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:28:19.417 [2024-07-25 05:48:12.917875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.917898] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1868ae0) 00:28:19.417 [2024-07-25 05:48:12.917909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.417 [2024-07-25 05:48:12.917932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf240, cid 0, qid 0 00:28:19.417 [2024-07-25 05:48:12.917943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf3c0, cid 1, qid 0 00:28:19.417 [2024-07-25 05:48:12.917966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf540, cid 2, qid 0 00:28:19.417 [2024-07-25 05:48:12.917974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.417 [2024-07-25 05:48:12.917981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf840, cid 4, qid 0 00:28:19.417 [2024-07-25 05:48:12.918159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.417 [2024-07-25 05:48:12.918171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.417 [2024-07-25 05:48:12.918179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf840) on tqpair=0x1868ae0 00:28:19.417 [2024-07-25 05:48:12.918196] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:19.417 [2024-07-25 05:48:12.918206] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:19.417 [2024-07-25 05:48:12.918223] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1868ae0) 00:28:19.417 [2024-07-25 05:48:12.918250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.417 [2024-07-25 05:48:12.918278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf840, cid 4, qid 0 00:28:19.417 [2024-07-25 05:48:12.918399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.417 [2024-07-25 05:48:12.918412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.417 [2024-07-25 05:48:12.918419] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918426] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1868ae0): datao=0, datal=4096, cccid=4 00:28:19.417 [2024-07-25 05:48:12.918433] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bf840) on tqpair(0x1868ae0): expected_datao=0, payload_size=4096 00:28:19.417 [2024-07-25 05:48:12.918441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918458] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918467] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.417 [2024-07-25 05:48:12.918539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.417 [2024-07-25 05:48:12.918546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf840) on tqpair=0x1868ae0 00:28:19.417 [2024-07-25 05:48:12.918571] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:19.417 [2024-07-25 05:48:12.918607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1868ae0) 00:28:19.417 [2024-07-25 05:48:12.918629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.417 [2024-07-25 05:48:12.918641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1868ae0) 00:28:19.417 [2024-07-25 05:48:12.918664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.417 [2024-07-25 05:48:12.918690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf840, cid 4, qid 0 00:28:19.417 [2024-07-25 05:48:12.918702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf9c0, cid 5, qid 0 00:28:19.417 [2024-07-25 05:48:12.918852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.417 [2024-07-25 05:48:12.918868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.417 [2024-07-25 05:48:12.918875] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918882] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1868ae0): datao=0, datal=1024, cccid=4 00:28:19.417 [2024-07-25 05:48:12.918889] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bf840) on tqpair(0x1868ae0): expected_datao=0, 
payload_size=1024 00:28:19.417 [2024-07-25 05:48:12.918897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918907] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918914] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.417 [2024-07-25 05:48:12.918932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.417 [2024-07-25 05:48:12.918940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.918947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf9c0) on tqpair=0x1868ae0 00:28:19.417 [2024-07-25 05:48:12.959378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.417 [2024-07-25 05:48:12.959401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.417 [2024-07-25 05:48:12.959410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.959418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf840) on tqpair=0x1868ae0 00:28:19.417 [2024-07-25 05:48:12.959435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.959444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1868ae0) 00:28:19.417 [2024-07-25 05:48:12.959456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.417 [2024-07-25 05:48:12.959485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf840, cid 4, qid 0 00:28:19.417 [2024-07-25 05:48:12.959625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.417 [2024-07-25 05:48:12.959641] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.417 [2024-07-25 05:48:12.959648] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.959655] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1868ae0): datao=0, datal=3072, cccid=4 00:28:19.417 [2024-07-25 05:48:12.959662] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bf840) on tqpair(0x1868ae0): expected_datao=0, payload_size=3072 00:28:19.417 [2024-07-25 05:48:12.959670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.959680] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.959688] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.959706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.417 [2024-07-25 05:48:12.959718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.417 [2024-07-25 05:48:12.959725] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.959732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf840) on tqpair=0x1868ae0 00:28:19.417 [2024-07-25 05:48:12.959747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.959756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1868ae0) 00:28:19.417 [2024-07-25 05:48:12.959766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.417 [2024-07-25 05:48:12.959794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf840, cid 4, qid 0 00:28:19.417 [2024-07-25 05:48:12.959928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.417 [2024-07-25 
05:48:12.959944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.417 [2024-07-25 05:48:12.959951] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.417 [2024-07-25 05:48:12.959958] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1868ae0): datao=0, datal=8, cccid=4 00:28:19.417 [2024-07-25 05:48:12.959965] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18bf840) on tqpair(0x1868ae0): expected_datao=0, payload_size=8 00:28:19.418 [2024-07-25 05:48:12.959973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.418 [2024-07-25 05:48:12.959983] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.418 [2024-07-25 05:48:12.959990] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.418 [2024-07-25 05:48:13.004255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.418 [2024-07-25 05:48:13.004274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.418 [2024-07-25 05:48:13.004282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.418 [2024-07-25 05:48:13.004305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf840) on tqpair=0x1868ae0 00:28:19.418 ===================================================== 00:28:19.418 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:19.418 ===================================================== 00:28:19.418 Controller Capabilities/Features 00:28:19.418 ================================ 00:28:19.418 Vendor ID: 0000 00:28:19.418 Subsystem Vendor ID: 0000 00:28:19.418 Serial Number: .................... 00:28:19.418 Model Number: ........................................ 
00:28:19.418 Firmware Version: 24.09 00:28:19.418 Recommended Arb Burst: 0 00:28:19.418 IEEE OUI Identifier: 00 00 00 00:28:19.418 Multi-path I/O 00:28:19.418 May have multiple subsystem ports: No 00:28:19.418 May have multiple controllers: No 00:28:19.418 Associated with SR-IOV VF: No 00:28:19.418 Max Data Transfer Size: 131072 00:28:19.418 Max Number of Namespaces: 0 00:28:19.418 Max Number of I/O Queues: 1024 00:28:19.418 NVMe Specification Version (VS): 1.3 00:28:19.418 NVMe Specification Version (Identify): 1.3 00:28:19.418 Maximum Queue Entries: 128 00:28:19.418 Contiguous Queues Required: Yes 00:28:19.418 Arbitration Mechanisms Supported 00:28:19.418 Weighted Round Robin: Not Supported 00:28:19.418 Vendor Specific: Not Supported 00:28:19.418 Reset Timeout: 15000 ms 00:28:19.418 Doorbell Stride: 4 bytes 00:28:19.418 NVM Subsystem Reset: Not Supported 00:28:19.418 Command Sets Supported 00:28:19.418 NVM Command Set: Supported 00:28:19.418 Boot Partition: Not Supported 00:28:19.418 Memory Page Size Minimum: 4096 bytes 00:28:19.418 Memory Page Size Maximum: 4096 bytes 00:28:19.418 Persistent Memory Region: Not Supported 00:28:19.418 Optional Asynchronous Events Supported 00:28:19.418 Namespace Attribute Notices: Not Supported 00:28:19.418 Firmware Activation Notices: Not Supported 00:28:19.418 ANA Change Notices: Not Supported 00:28:19.418 PLE Aggregate Log Change Notices: Not Supported 00:28:19.418 LBA Status Info Alert Notices: Not Supported 00:28:19.418 EGE Aggregate Log Change Notices: Not Supported 00:28:19.418 Normal NVM Subsystem Shutdown event: Not Supported 00:28:19.418 Zone Descriptor Change Notices: Not Supported 00:28:19.418 Discovery Log Change Notices: Supported 00:28:19.418 Controller Attributes 00:28:19.418 128-bit Host Identifier: Not Supported 00:28:19.418 Non-Operational Permissive Mode: Not Supported 00:28:19.418 NVM Sets: Not Supported 00:28:19.418 Read Recovery Levels: Not Supported 00:28:19.418 Endurance Groups: Not Supported 00:28:19.418 
Predictable Latency Mode: Not Supported 00:28:19.418 Traffic Based Keep ALive: Not Supported 00:28:19.418 Namespace Granularity: Not Supported 00:28:19.418 SQ Associations: Not Supported 00:28:19.418 UUID List: Not Supported 00:28:19.418 Multi-Domain Subsystem: Not Supported 00:28:19.418 Fixed Capacity Management: Not Supported 00:28:19.418 Variable Capacity Management: Not Supported 00:28:19.418 Delete Endurance Group: Not Supported 00:28:19.418 Delete NVM Set: Not Supported 00:28:19.418 Extended LBA Formats Supported: Not Supported 00:28:19.418 Flexible Data Placement Supported: Not Supported 00:28:19.418 00:28:19.418 Controller Memory Buffer Support 00:28:19.418 ================================ 00:28:19.418 Supported: No 00:28:19.418 00:28:19.418 Persistent Memory Region Support 00:28:19.418 ================================ 00:28:19.418 Supported: No 00:28:19.418 00:28:19.418 Admin Command Set Attributes 00:28:19.418 ============================ 00:28:19.418 Security Send/Receive: Not Supported 00:28:19.418 Format NVM: Not Supported 00:28:19.418 Firmware Activate/Download: Not Supported 00:28:19.418 Namespace Management: Not Supported 00:28:19.418 Device Self-Test: Not Supported 00:28:19.418 Directives: Not Supported 00:28:19.418 NVMe-MI: Not Supported 00:28:19.418 Virtualization Management: Not Supported 00:28:19.418 Doorbell Buffer Config: Not Supported 00:28:19.418 Get LBA Status Capability: Not Supported 00:28:19.418 Command & Feature Lockdown Capability: Not Supported 00:28:19.418 Abort Command Limit: 1 00:28:19.418 Async Event Request Limit: 4 00:28:19.418 Number of Firmware Slots: N/A 00:28:19.418 Firmware Slot 1 Read-Only: N/A 00:28:19.418 Firmware Activation Without Reset: N/A 00:28:19.418 Multiple Update Detection Support: N/A 00:28:19.418 Firmware Update Granularity: No Information Provided 00:28:19.418 Per-Namespace SMART Log: No 00:28:19.418 Asymmetric Namespace Access Log Page: Not Supported 00:28:19.418 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:28:19.418 Command Effects Log Page: Not Supported 00:28:19.418 Get Log Page Extended Data: Supported 00:28:19.418 Telemetry Log Pages: Not Supported 00:28:19.418 Persistent Event Log Pages: Not Supported 00:28:19.418 Supported Log Pages Log Page: May Support 00:28:19.418 Commands Supported & Effects Log Page: Not Supported 00:28:19.418 Feature Identifiers & Effects Log Page:May Support 00:28:19.418 NVMe-MI Commands & Effects Log Page: May Support 00:28:19.418 Data Area 4 for Telemetry Log: Not Supported 00:28:19.418 Error Log Page Entries Supported: 128 00:28:19.418 Keep Alive: Not Supported 00:28:19.418 00:28:19.418 NVM Command Set Attributes 00:28:19.418 ========================== 00:28:19.418 Submission Queue Entry Size 00:28:19.418 Max: 1 00:28:19.418 Min: 1 00:28:19.418 Completion Queue Entry Size 00:28:19.418 Max: 1 00:28:19.418 Min: 1 00:28:19.418 Number of Namespaces: 0 00:28:19.418 Compare Command: Not Supported 00:28:19.418 Write Uncorrectable Command: Not Supported 00:28:19.418 Dataset Management Command: Not Supported 00:28:19.418 Write Zeroes Command: Not Supported 00:28:19.418 Set Features Save Field: Not Supported 00:28:19.418 Reservations: Not Supported 00:28:19.418 Timestamp: Not Supported 00:28:19.418 Copy: Not Supported 00:28:19.418 Volatile Write Cache: Not Present 00:28:19.418 Atomic Write Unit (Normal): 1 00:28:19.418 Atomic Write Unit (PFail): 1 00:28:19.418 Atomic Compare & Write Unit: 1 00:28:19.418 Fused Compare & Write: Supported 00:28:19.418 Scatter-Gather List 00:28:19.418 SGL Command Set: Supported 00:28:19.418 SGL Keyed: Supported 00:28:19.418 SGL Bit Bucket Descriptor: Not Supported 00:28:19.418 SGL Metadata Pointer: Not Supported 00:28:19.418 Oversized SGL: Not Supported 00:28:19.418 SGL Metadata Address: Not Supported 00:28:19.418 SGL Offset: Supported 00:28:19.418 Transport SGL Data Block: Not Supported 00:28:19.418 Replay Protected Memory Block: Not Supported 00:28:19.418 00:28:19.418 
Firmware Slot Information 00:28:19.418 ========================= 00:28:19.418 Active slot: 0 00:28:19.418 00:28:19.418 00:28:19.418 Error Log 00:28:19.418 ========= 00:28:19.418 00:28:19.418 Active Namespaces 00:28:19.418 ================= 00:28:19.418 Discovery Log Page 00:28:19.418 ================== 00:28:19.418 Generation Counter: 2 00:28:19.418 Number of Records: 2 00:28:19.418 Record Format: 0 00:28:19.418 00:28:19.418 Discovery Log Entry 0 00:28:19.418 ---------------------- 00:28:19.418 Transport Type: 3 (TCP) 00:28:19.418 Address Family: 1 (IPv4) 00:28:19.418 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:19.418 Entry Flags: 00:28:19.418 Duplicate Returned Information: 1 00:28:19.418 Explicit Persistent Connection Support for Discovery: 1 00:28:19.418 Transport Requirements: 00:28:19.418 Secure Channel: Not Required 00:28:19.418 Port ID: 0 (0x0000) 00:28:19.418 Controller ID: 65535 (0xffff) 00:28:19.418 Admin Max SQ Size: 128 00:28:19.418 Transport Service Identifier: 4420 00:28:19.418 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:19.418 Transport Address: 10.0.0.2 00:28:19.418 Discovery Log Entry 1 00:28:19.418 ---------------------- 00:28:19.418 Transport Type: 3 (TCP) 00:28:19.418 Address Family: 1 (IPv4) 00:28:19.419 Subsystem Type: 2 (NVM Subsystem) 00:28:19.419 Entry Flags: 00:28:19.419 Duplicate Returned Information: 0 00:28:19.419 Explicit Persistent Connection Support for Discovery: 0 00:28:19.419 Transport Requirements: 00:28:19.419 Secure Channel: Not Required 00:28:19.419 Port ID: 0 (0x0000) 00:28:19.419 Controller ID: 65535 (0xffff) 00:28:19.419 Admin Max SQ Size: 128 00:28:19.419 Transport Service Identifier: 4420 00:28:19.419 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:19.419 Transport Address: 10.0.0.2 [2024-07-25 05:48:13.004414] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:19.419 [2024-07-25 05:48:13.004439] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf240) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.004453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.419 [2024-07-25 05:48:13.004462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf3c0) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.004470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.419 [2024-07-25 05:48:13.004478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf540) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.004486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.419 [2024-07-25 05:48:13.004494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.004502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.419 [2024-07-25 05:48:13.004520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.004529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.004536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.419 [2024-07-25 05:48:13.004547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-25 05:48:13.004572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.419 [2024-07-25 05:48:13.004698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.419 [2024-07-25 05:48:13.004714] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.419 [2024-07-25 05:48:13.004721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.004728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.004740] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.004748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.004755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.419 [2024-07-25 05:48:13.004765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-25 05:48:13.004792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.419 [2024-07-25 05:48:13.004939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.419 [2024-07-25 05:48:13.004952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.419 [2024-07-25 05:48:13.004959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.004966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.004975] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:19.419 [2024-07-25 05:48:13.004983] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:19.419 [2024-07-25 05:48:13.004999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.419 [2024-07-25 
05:48:13.005015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.419 [2024-07-25 05:48:13.005025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-25 05:48:13.005046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.419 [2024-07-25 05:48:13.005162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.419 [2024-07-25 05:48:13.005182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.419 [2024-07-25 05:48:13.005190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.005215] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005224] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.419 [2024-07-25 05:48:13.005248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-25 05:48:13.005271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.419 [2024-07-25 05:48:13.005388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.419 [2024-07-25 05:48:13.005403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.419 [2024-07-25 05:48:13.005411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 
00:28:19.419 [2024-07-25 05:48:13.005435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.419 [2024-07-25 05:48:13.005461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-25 05:48:13.005482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.419 [2024-07-25 05:48:13.005602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.419 [2024-07-25 05:48:13.005618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.419 [2024-07-25 05:48:13.005625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.005649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.419 [2024-07-25 05:48:13.005675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-25 05:48:13.005696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.419 [2024-07-25 05:48:13.005812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.419 [2024-07-25 05:48:13.005824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.419 
[2024-07-25 05:48:13.005831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.005854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.005870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.419 [2024-07-25 05:48:13.005880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-25 05:48:13.005901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.419 [2024-07-25 05:48:13.006016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.419 [2024-07-25 05:48:13.006031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.419 [2024-07-25 05:48:13.006043] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.006050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.006067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.006076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.006083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.419 [2024-07-25 05:48:13.006093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-25 05:48:13.006114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 
0 00:28:19.419 [2024-07-25 05:48:13.006226] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.419 [2024-07-25 05:48:13.006250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.419 [2024-07-25 05:48:13.006259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.006266] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.006283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.006293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.006299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.419 [2024-07-25 05:48:13.006310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-25 05:48:13.006331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.419 [2024-07-25 05:48:13.006448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.419 [2024-07-25 05:48:13.006464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.419 [2024-07-25 05:48:13.006471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.419 [2024-07-25 05:48:13.006478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.419 [2024-07-25 05:48:13.006495] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.006504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.006511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.420 [2024-07-25 05:48:13.006521] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.420 [2024-07-25 05:48:13.006542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.420 [2024-07-25 05:48:13.006658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.420 [2024-07-25 05:48:13.006674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.420 [2024-07-25 05:48:13.006681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.006688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.420 [2024-07-25 05:48:13.006704] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.006714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.006720] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.420 [2024-07-25 05:48:13.006731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.420 [2024-07-25 05:48:13.006751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.420 [2024-07-25 05:48:13.006867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.420 [2024-07-25 05:48:13.006882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.420 [2024-07-25 05:48:13.006890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.006900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.420 [2024-07-25 05:48:13.006918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.006927] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.006934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.420 [2024-07-25 05:48:13.006945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.420 [2024-07-25 05:48:13.006966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.420 [2024-07-25 05:48:13.007081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.420 [2024-07-25 05:48:13.007097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.420 [2024-07-25 05:48:13.007104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.420 [2024-07-25 05:48:13.007128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.420 [2024-07-25 05:48:13.007154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.420 [2024-07-25 05:48:13.007175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.420 [2024-07-25 05:48:13.007290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.420 [2024-07-25 05:48:13.007304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.420 [2024-07-25 05:48:13.007312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007319] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.420 [2024-07-25 05:48:13.007335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.420 [2024-07-25 05:48:13.007362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.420 [2024-07-25 05:48:13.007383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.420 [2024-07-25 05:48:13.007499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.420 [2024-07-25 05:48:13.007515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.420 [2024-07-25 05:48:13.007522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.420 [2024-07-25 05:48:13.007545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.420 [2024-07-25 05:48:13.007572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.420 [2024-07-25 05:48:13.007593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.420 [2024-07-25 05:48:13.007708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.420 [2024-07-25 
05:48:13.007720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.420 [2024-07-25 05:48:13.007727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.420 [2024-07-25 05:48:13.007754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.420 [2024-07-25 05:48:13.007782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.420 [2024-07-25 05:48:13.007802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.420 [2024-07-25 05:48:13.007917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.420 [2024-07-25 05:48:13.007933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.420 [2024-07-25 05:48:13.007940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.420 [2024-07-25 05:48:13.007963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.420 [2024-07-25 05:48:13.007979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.420 [2024-07-25 05:48:13.007990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.420 [2024-07-25 
05:48:13.008011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.421 [2024-07-25 05:48:13.008124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.421 [2024-07-25 05:48:13.008136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.421 [2024-07-25 05:48:13.008144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.421 [2024-07-25 05:48:13.008150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.421 [2024-07-25 05:48:13.008166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.421 [2024-07-25 05:48:13.008176] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.421 [2024-07-25 05:48:13.008182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.421 [2024-07-25 05:48:13.008193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.421 [2024-07-25 05:48:13.008213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.421 [2024-07-25 05:48:13.012252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.421 [2024-07-25 05:48:13.012269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.421 [2024-07-25 05:48:13.012277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.421 [2024-07-25 05:48:13.012285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.421 [2024-07-25 05:48:13.012303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.421 [2024-07-25 05:48:13.012313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.421 [2024-07-25 05:48:13.012320] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1868ae0) 00:28:19.421 [2024-07-25 05:48:13.012330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.421 [2024-07-25 05:48:13.012353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18bf6c0, cid 3, qid 0 00:28:19.421 [2024-07-25 05:48:13.012502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.421 [2024-07-25 05:48:13.012517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.421 [2024-07-25 05:48:13.012525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.421 [2024-07-25 05:48:13.012532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18bf6c0) on tqpair=0x1868ae0 00:28:19.421 [2024-07-25 05:48:13.012545] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:28:19.421 00:28:19.421 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:19.421 [2024-07-25 05:48:13.043722] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:28:19.421 [2024-07-25 05:48:13.043767] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719519 ] 00:28:19.421 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.438 [2024-07-25 05:48:13.074987] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:19.438 [2024-07-25 05:48:13.075033] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:19.438 [2024-07-25 05:48:13.075043] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:19.438 [2024-07-25 05:48:13.075056] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:19.438 [2024-07-25 05:48:13.075067] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:19.438 [2024-07-25 05:48:13.075277] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:19.438 [2024-07-25 05:48:13.075317] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa6cae0 0 00:28:19.438 [2024-07-25 05:48:13.081271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:19.438 [2024-07-25 05:48:13.081295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:19.438 [2024-07-25 05:48:13.081312] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:19.438 [2024-07-25 05:48:13.081318] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:19.438 [2024-07-25 05:48:13.081356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.081367] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:19.438 [2024-07-25 05:48:13.081374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6cae0) 00:28:19.438 [2024-07-25 05:48:13.081388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:19.438 [2024-07-25 05:48:13.081414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3240, cid 0, qid 0 00:28:19.438 [2024-07-25 05:48:13.089258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.438 [2024-07-25 05:48:13.089275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.438 [2024-07-25 05:48:13.089297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.089305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3240) on tqpair=0xa6cae0 00:28:19.438 [2024-07-25 05:48:13.089319] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:19.438 [2024-07-25 05:48:13.089330] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:19.438 [2024-07-25 05:48:13.089339] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:19.438 [2024-07-25 05:48:13.089360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.089369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.089376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6cae0) 00:28:19.438 [2024-07-25 05:48:13.089387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.438 [2024-07-25 05:48:13.089416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3240, cid 0, qid 0 
00:28:19.438 [2024-07-25 05:48:13.089557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.438 [2024-07-25 05:48:13.089574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.438 [2024-07-25 05:48:13.089581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.089588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3240) on tqpair=0xa6cae0 00:28:19.438 [2024-07-25 05:48:13.089601] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:19.438 [2024-07-25 05:48:13.089616] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:19.438 [2024-07-25 05:48:13.089629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.089636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.089643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6cae0) 00:28:19.438 [2024-07-25 05:48:13.089654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.438 [2024-07-25 05:48:13.089677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3240, cid 0, qid 0 00:28:19.438 [2024-07-25 05:48:13.089840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.438 [2024-07-25 05:48:13.089857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.438 [2024-07-25 05:48:13.089865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.089872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3240) on tqpair=0xa6cae0 00:28:19.438 [2024-07-25 05:48:13.089881] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:19.438 [2024-07-25 05:48:13.089896] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:19.438 [2024-07-25 05:48:13.089909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.089916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.089923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6cae0) 00:28:19.438 [2024-07-25 05:48:13.089934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.438 [2024-07-25 05:48:13.089956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3240, cid 0, qid 0 00:28:19.438 [2024-07-25 05:48:13.090078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.438 [2024-07-25 05:48:13.090095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.438 [2024-07-25 05:48:13.090102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.090109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3240) on tqpair=0xa6cae0 00:28:19.438 [2024-07-25 05:48:13.090118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:19.438 [2024-07-25 05:48:13.090136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.090146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.090152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6cae0) 00:28:19.438 [2024-07-25 05:48:13.090163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.438 [2024-07-25 05:48:13.090185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3240, cid 0, qid 0 00:28:19.438 [2024-07-25 05:48:13.090321] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.438 [2024-07-25 05:48:13.090340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.438 [2024-07-25 05:48:13.090349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.090356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3240) on tqpair=0xa6cae0 00:28:19.438 [2024-07-25 05:48:13.090363] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:19.438 [2024-07-25 05:48:13.090372] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:19.438 [2024-07-25 05:48:13.090385] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:19.438 [2024-07-25 05:48:13.090495] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:19.438 [2024-07-25 05:48:13.090502] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:19.438 [2024-07-25 05:48:13.090515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.090522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.090544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6cae0) 00:28:19.438 [2024-07-25 05:48:13.090554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.438 [2024-07-25 05:48:13.090575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3240, cid 0, qid 0 00:28:19.438 [2024-07-25 05:48:13.090728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.438 [2024-07-25 05:48:13.090743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.438 [2024-07-25 05:48:13.090751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.090759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3240) on tqpair=0xa6cae0 00:28:19.438 [2024-07-25 05:48:13.090768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:19.438 [2024-07-25 05:48:13.090786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.090795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.090802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6cae0) 00:28:19.438 [2024-07-25 05:48:13.090812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.438 [2024-07-25 05:48:13.090834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3240, cid 0, qid 0 00:28:19.438 [2024-07-25 05:48:13.090971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.438 [2024-07-25 05:48:13.090988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.438 [2024-07-25 05:48:13.090996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.091003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3240) on tqpair=0xa6cae0 00:28:19.438 [2024-07-25 05:48:13.091011] 
nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:19.438 [2024-07-25 05:48:13.091020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:19.438 [2024-07-25 05:48:13.091034] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:19.438 [2024-07-25 05:48:13.091048] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:19.438 [2024-07-25 05:48:13.091062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.438 [2024-07-25 05:48:13.091070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6cae0) 00:28:19.438 [2024-07-25 05:48:13.091084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.439 [2024-07-25 05:48:13.091122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3240, cid 0, qid 0 00:28:19.439 [2024-07-25 05:48:13.091312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.439 [2024-07-25 05:48:13.091333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.439 [2024-07-25 05:48:13.091342] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091348] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6cae0): datao=0, datal=4096, cccid=0 00:28:19.439 [2024-07-25 05:48:13.091356] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac3240) on tqpair(0xa6cae0): expected_datao=0, payload_size=4096 00:28:19.439 [2024-07-25 05:48:13.091363] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091373] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091381] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.439 [2024-07-25 05:48:13.091404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.439 [2024-07-25 05:48:13.091411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3240) on tqpair=0xa6cae0 00:28:19.439 [2024-07-25 05:48:13.091429] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:19.439 [2024-07-25 05:48:13.091437] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:19.439 [2024-07-25 05:48:13.091445] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:19.439 [2024-07-25 05:48:13.091452] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:19.439 [2024-07-25 05:48:13.091459] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:19.439 [2024-07-25 05:48:13.091467] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:19.439 [2024-07-25 05:48:13.091482] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:19.439 [2024-07-25 05:48:13.091499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091508] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6cae0) 00:28:19.439 [2024-07-25 05:48:13.091527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:19.439 [2024-07-25 05:48:13.091560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3240, cid 0, qid 0 00:28:19.439 [2024-07-25 05:48:13.091719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.439 [2024-07-25 05:48:13.091734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.439 [2024-07-25 05:48:13.091742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3240) on tqpair=0xa6cae0 00:28:19.439 [2024-07-25 05:48:13.091759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6cae0) 00:28:19.439 [2024-07-25 05:48:13.091785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.439 [2024-07-25 05:48:13.091800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa6cae0) 00:28:19.439 [2024-07-25 05:48:13.091823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:19.439 [2024-07-25 05:48:13.091833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa6cae0) 00:28:19.439 [2024-07-25 05:48:13.091871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.439 [2024-07-25 05:48:13.091880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6cae0) 00:28:19.439 [2024-07-25 05:48:13.091909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.439 [2024-07-25 05:48:13.091917] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:19.439 [2024-07-25 05:48:13.091936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:19.439 [2024-07-25 05:48:13.091949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.091956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa6cae0) 00:28:19.439 [2024-07-25 05:48:13.091966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.439 [2024-07-25 05:48:13.092003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xac3240, cid 0, qid 0 00:28:19.439 [2024-07-25 05:48:13.092020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac33c0, cid 1, qid 0 00:28:19.439 [2024-07-25 05:48:13.092030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3540, cid 2, qid 0 00:28:19.439 [2024-07-25 05:48:13.092038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac36c0, cid 3, qid 0 00:28:19.439 [2024-07-25 05:48:13.092060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3840, cid 4, qid 0 00:28:19.439 [2024-07-25 05:48:13.092218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.439 [2024-07-25 05:48:13.092231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.439 [2024-07-25 05:48:13.092238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.092254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3840) on tqpair=0xa6cae0 00:28:19.439 [2024-07-25 05:48:13.092263] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:19.439 [2024-07-25 05:48:13.092272] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:19.439 [2024-07-25 05:48:13.092291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:19.439 [2024-07-25 05:48:13.092304] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:19.439 [2024-07-25 05:48:13.092315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.092323] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.092332] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa6cae0) 00:28:19.439 [2024-07-25 05:48:13.092343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:19.439 [2024-07-25 05:48:13.092365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3840, cid 4, qid 0 00:28:19.439 [2024-07-25 05:48:13.092503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.439 [2024-07-25 05:48:13.092519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.439 [2024-07-25 05:48:13.092527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.092533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3840) on tqpair=0xa6cae0 00:28:19.439 [2024-07-25 05:48:13.092603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:19.439 [2024-07-25 05:48:13.092624] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:19.439 [2024-07-25 05:48:13.092639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.092647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa6cae0) 00:28:19.439 [2024-07-25 05:48:13.092672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.439 [2024-07-25 05:48:13.092695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3840, cid 4, qid 0 00:28:19.439 [2024-07-25 05:48:13.092870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.439 [2024-07-25 05:48:13.092889] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.439 [2024-07-25 05:48:13.092898] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.092904] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6cae0): datao=0, datal=4096, cccid=4 00:28:19.439 [2024-07-25 05:48:13.092912] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac3840) on tqpair(0xa6cae0): expected_datao=0, payload_size=4096 00:28:19.439 [2024-07-25 05:48:13.092920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.092938] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.439 [2024-07-25 05:48:13.092948] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.699 [2024-07-25 05:48:13.137271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.699 [2024-07-25 05:48:13.137290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.699 [2024-07-25 05:48:13.137298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.699 [2024-07-25 05:48:13.137306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3840) on tqpair=0xa6cae0 00:28:19.699 [2024-07-25 05:48:13.137328] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:19.699 [2024-07-25 05:48:13.137345] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:19.699 [2024-07-25 05:48:13.137363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:19.699 [2024-07-25 05:48:13.137377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.699 [2024-07-25 05:48:13.137385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xa6cae0) 00:28:19.699 [2024-07-25 05:48:13.137396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.699 [2024-07-25 05:48:13.137418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3840, cid 4, qid 0 00:28:19.699 [2024-07-25 05:48:13.137576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.699 [2024-07-25 05:48:13.137595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.699 [2024-07-25 05:48:13.137608] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.699 [2024-07-25 05:48:13.137615] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6cae0): datao=0, datal=4096, cccid=4 00:28:19.699 [2024-07-25 05:48:13.137623] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac3840) on tqpair(0xa6cae0): expected_datao=0, payload_size=4096 00:28:19.699 [2024-07-25 05:48:13.137631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.699 [2024-07-25 05:48:13.137649] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.699 [2024-07-25 05:48:13.137658] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.699 [2024-07-25 05:48:13.179346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.699 [2024-07-25 05:48:13.179370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.699 [2024-07-25 05:48:13.179379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.699 [2024-07-25 05:48:13.179387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3840) on tqpair=0xa6cae0 00:28:19.699 [2024-07-25 05:48:13.179412] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:19.699 [2024-07-25 
05:48:13.179433] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:19.699 [2024-07-25 05:48:13.179448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.699 [2024-07-25 05:48:13.179456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa6cae0) 00:28:19.699 [2024-07-25 05:48:13.179468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.699 [2024-07-25 05:48:13.179492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3840, cid 4, qid 0 00:28:19.700 [2024-07-25 05:48:13.179667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.700 [2024-07-25 05:48:13.179684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.700 [2024-07-25 05:48:13.179692] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.179699] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6cae0): datao=0, datal=4096, cccid=4 00:28:19.700 [2024-07-25 05:48:13.179706] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac3840) on tqpair(0xa6cae0): expected_datao=0, payload_size=4096 00:28:19.700 [2024-07-25 05:48:13.179714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.179725] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.179732] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.224260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.700 [2024-07-25 05:48:13.224299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.700 [2024-07-25 05:48:13.224309] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.224316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3840) on tqpair=0xa6cae0 00:28:19.700 [2024-07-25 05:48:13.224331] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:19.700 [2024-07-25 05:48:13.224348] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:19.700 [2024-07-25 05:48:13.224364] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:19.700 [2024-07-25 05:48:13.224378] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:19.700 [2024-07-25 05:48:13.224387] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:19.700 [2024-07-25 05:48:13.224399] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:19.700 [2024-07-25 05:48:13.224409] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:19.700 [2024-07-25 05:48:13.224417] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:19.700 [2024-07-25 05:48:13.224425] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:19.700 [2024-07-25 05:48:13.224444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.224453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xa6cae0) 00:28:19.700 [2024-07-25 05:48:13.224465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.700 [2024-07-25 05:48:13.224476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.224483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.224490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa6cae0) 00:28:19.700 [2024-07-25 05:48:13.224499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.700 [2024-07-25 05:48:13.224526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3840, cid 4, qid 0 00:28:19.700 [2024-07-25 05:48:13.224544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac39c0, cid 5, qid 0 00:28:19.700 [2024-07-25 05:48:13.224695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.700 [2024-07-25 05:48:13.224711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.700 [2024-07-25 05:48:13.224719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.224726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3840) on tqpair=0xa6cae0 00:28:19.700 [2024-07-25 05:48:13.224736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.700 [2024-07-25 05:48:13.224746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.700 [2024-07-25 05:48:13.224753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.224759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac39c0) on tqpair=0xa6cae0 00:28:19.700 [2024-07-25 05:48:13.224776] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.224785] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa6cae0) 00:28:19.700 [2024-07-25 05:48:13.224811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.700 [2024-07-25 05:48:13.224833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac39c0, cid 5, qid 0 00:28:19.700 [2024-07-25 05:48:13.224981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.700 [2024-07-25 05:48:13.225011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.700 [2024-07-25 05:48:13.225019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac39c0) on tqpair=0xa6cae0 00:28:19.700 [2024-07-25 05:48:13.225042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa6cae0) 00:28:19.700 [2024-07-25 05:48:13.225062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.700 [2024-07-25 05:48:13.225084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac39c0, cid 5, qid 0 00:28:19.700 [2024-07-25 05:48:13.225207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.700 [2024-07-25 05:48:13.225223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.700 [2024-07-25 05:48:13.225235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac39c0) on 
tqpair=0xa6cae0 00:28:19.700 [2024-07-25 05:48:13.225268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa6cae0) 00:28:19.700 [2024-07-25 05:48:13.225288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.700 [2024-07-25 05:48:13.225310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac39c0, cid 5, qid 0 00:28:19.700 [2024-07-25 05:48:13.225430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.700 [2024-07-25 05:48:13.225446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.700 [2024-07-25 05:48:13.225454] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac39c0) on tqpair=0xa6cae0 00:28:19.700 [2024-07-25 05:48:13.225485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa6cae0) 00:28:19.700 [2024-07-25 05:48:13.225508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.700 [2024-07-25 05:48:13.225520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa6cae0) 00:28:19.700 [2024-07-25 05:48:13.225537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.700 [2024-07-25 
05:48:13.225548] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa6cae0) 00:28:19.700 [2024-07-25 05:48:13.225565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.700 [2024-07-25 05:48:13.225592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa6cae0) 00:28:19.700 [2024-07-25 05:48:13.225609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.700 [2024-07-25 05:48:13.225631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac39c0, cid 5, qid 0 00:28:19.700 [2024-07-25 05:48:13.225647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3840, cid 4, qid 0 00:28:19.700 [2024-07-25 05:48:13.225659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3b40, cid 6, qid 0 00:28:19.700 [2024-07-25 05:48:13.225666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3cc0, cid 7, qid 0 00:28:19.700 [2024-07-25 05:48:13.225865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.700 [2024-07-25 05:48:13.225884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.700 [2024-07-25 05:48:13.225893] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225900] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6cae0): datao=0, datal=8192, cccid=5 00:28:19.700 [2024-07-25 05:48:13.225908] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xac39c0) on tqpair(0xa6cae0): expected_datao=0, payload_size=8192 00:28:19.700 [2024-07-25 05:48:13.225915] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225974] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.225994] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.226006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.700 [2024-07-25 05:48:13.226015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.700 [2024-07-25 05:48:13.226022] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.226028] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6cae0): datao=0, datal=512, cccid=4 00:28:19.700 [2024-07-25 05:48:13.226036] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac3840) on tqpair(0xa6cae0): expected_datao=0, payload_size=512 00:28:19.700 [2024-07-25 05:48:13.226043] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.226052] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.226059] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.226067] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.700 [2024-07-25 05:48:13.226076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.700 [2024-07-25 05:48:13.226083] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.700 [2024-07-25 05:48:13.226089] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6cae0): datao=0, datal=512, cccid=6 00:28:19.701 [2024-07-25 05:48:13.226096] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac3b40) on tqpair(0xa6cae0): expected_datao=0, payload_size=512 
00:28:19.701 [2024-07-25 05:48:13.226103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226112] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226119] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:19.701 [2024-07-25 05:48:13.226136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:19.701 [2024-07-25 05:48:13.226143] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226149] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6cae0): datao=0, datal=4096, cccid=7 00:28:19.701 [2024-07-25 05:48:13.226157] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac3cc0) on tqpair(0xa6cae0): expected_datao=0, payload_size=4096 00:28:19.701 [2024-07-25 05:48:13.226164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226173] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226180] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.701 [2024-07-25 05:48:13.226202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.701 [2024-07-25 05:48:13.226209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac39c0) on tqpair=0xa6cae0 00:28:19.701 [2024-07-25 05:48:13.226257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.701 [2024-07-25 05:48:13.226271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.701 [2024-07-25 05:48:13.226278] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3840) on tqpair=0xa6cae0 00:28:19.701 [2024-07-25 05:48:13.226303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.701 [2024-07-25 05:48:13.226314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.701 [2024-07-25 05:48:13.226321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3b40) on tqpair=0xa6cae0 00:28:19.701 [2024-07-25 05:48:13.226338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.701 [2024-07-25 05:48:13.226349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.701 [2024-07-25 05:48:13.226358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.701 [2024-07-25 05:48:13.226366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3cc0) on tqpair=0xa6cae0 00:28:19.701 ===================================================== 00:28:19.701 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:19.701 ===================================================== 00:28:19.701 Controller Capabilities/Features 00:28:19.701 ================================ 00:28:19.701 Vendor ID: 8086 00:28:19.701 Subsystem Vendor ID: 8086 00:28:19.701 Serial Number: SPDK00000000000001 00:28:19.701 Model Number: SPDK bdev Controller 00:28:19.701 Firmware Version: 24.09 00:28:19.701 Recommended Arb Burst: 6 00:28:19.701 IEEE OUI Identifier: e4 d2 5c 00:28:19.701 Multi-path I/O 00:28:19.701 May have multiple subsystem ports: Yes 00:28:19.701 May have multiple controllers: Yes 00:28:19.701 Associated with SR-IOV VF: No 00:28:19.701 Max Data Transfer Size: 131072 00:28:19.701 Max Number of Namespaces: 32 00:28:19.701 Max Number of I/O 
Queues: 127 00:28:19.701 NVMe Specification Version (VS): 1.3 00:28:19.701 NVMe Specification Version (Identify): 1.3 00:28:19.701 Maximum Queue Entries: 128 00:28:19.701 Contiguous Queues Required: Yes 00:28:19.701 Arbitration Mechanisms Supported 00:28:19.701 Weighted Round Robin: Not Supported 00:28:19.701 Vendor Specific: Not Supported 00:28:19.701 Reset Timeout: 15000 ms 00:28:19.701 Doorbell Stride: 4 bytes 00:28:19.701 NVM Subsystem Reset: Not Supported 00:28:19.701 Command Sets Supported 00:28:19.701 NVM Command Set: Supported 00:28:19.701 Boot Partition: Not Supported 00:28:19.701 Memory Page Size Minimum: 4096 bytes 00:28:19.701 Memory Page Size Maximum: 4096 bytes 00:28:19.701 Persistent Memory Region: Not Supported 00:28:19.701 Optional Asynchronous Events Supported 00:28:19.701 Namespace Attribute Notices: Supported 00:28:19.701 Firmware Activation Notices: Not Supported 00:28:19.701 ANA Change Notices: Not Supported 00:28:19.701 PLE Aggregate Log Change Notices: Not Supported 00:28:19.701 LBA Status Info Alert Notices: Not Supported 00:28:19.701 EGE Aggregate Log Change Notices: Not Supported 00:28:19.701 Normal NVM Subsystem Shutdown event: Not Supported 00:28:19.701 Zone Descriptor Change Notices: Not Supported 00:28:19.701 Discovery Log Change Notices: Not Supported 00:28:19.701 Controller Attributes 00:28:19.701 128-bit Host Identifier: Supported 00:28:19.701 Non-Operational Permissive Mode: Not Supported 00:28:19.701 NVM Sets: Not Supported 00:28:19.701 Read Recovery Levels: Not Supported 00:28:19.701 Endurance Groups: Not Supported 00:28:19.701 Predictable Latency Mode: Not Supported 00:28:19.701 Traffic Based Keep ALive: Not Supported 00:28:19.701 Namespace Granularity: Not Supported 00:28:19.701 SQ Associations: Not Supported 00:28:19.701 UUID List: Not Supported 00:28:19.701 Multi-Domain Subsystem: Not Supported 00:28:19.701 Fixed Capacity Management: Not Supported 00:28:19.701 Variable Capacity Management: Not Supported 00:28:19.701 Delete 
Endurance Group: Not Supported 00:28:19.701 Delete NVM Set: Not Supported 00:28:19.701 Extended LBA Formats Supported: Not Supported 00:28:19.701 Flexible Data Placement Supported: Not Supported 00:28:19.701 00:28:19.701 Controller Memory Buffer Support 00:28:19.701 ================================ 00:28:19.701 Supported: No 00:28:19.701 00:28:19.701 Persistent Memory Region Support 00:28:19.701 ================================ 00:28:19.701 Supported: No 00:28:19.701 00:28:19.701 Admin Command Set Attributes 00:28:19.701 ============================ 00:28:19.701 Security Send/Receive: Not Supported 00:28:19.701 Format NVM: Not Supported 00:28:19.701 Firmware Activate/Download: Not Supported 00:28:19.701 Namespace Management: Not Supported 00:28:19.701 Device Self-Test: Not Supported 00:28:19.701 Directives: Not Supported 00:28:19.701 NVMe-MI: Not Supported 00:28:19.701 Virtualization Management: Not Supported 00:28:19.701 Doorbell Buffer Config: Not Supported 00:28:19.701 Get LBA Status Capability: Not Supported 00:28:19.701 Command & Feature Lockdown Capability: Not Supported 00:28:19.701 Abort Command Limit: 4 00:28:19.701 Async Event Request Limit: 4 00:28:19.701 Number of Firmware Slots: N/A 00:28:19.701 Firmware Slot 1 Read-Only: N/A 00:28:19.701 Firmware Activation Without Reset: N/A 00:28:19.701 Multiple Update Detection Support: N/A 00:28:19.701 Firmware Update Granularity: No Information Provided 00:28:19.701 Per-Namespace SMART Log: No 00:28:19.701 Asymmetric Namespace Access Log Page: Not Supported 00:28:19.701 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:19.701 Command Effects Log Page: Supported 00:28:19.701 Get Log Page Extended Data: Supported 00:28:19.701 Telemetry Log Pages: Not Supported 00:28:19.701 Persistent Event Log Pages: Not Supported 00:28:19.701 Supported Log Pages Log Page: May Support 00:28:19.701 Commands Supported & Effects Log Page: Not Supported 00:28:19.701 Feature Identifiers & Effects Log Page:May Support 00:28:19.701 NVMe-MI 
Commands & Effects Log Page: May Support 00:28:19.701 Data Area 4 for Telemetry Log: Not Supported 00:28:19.701 Error Log Page Entries Supported: 128 00:28:19.701 Keep Alive: Supported 00:28:19.701 Keep Alive Granularity: 10000 ms 00:28:19.701 00:28:19.701 NVM Command Set Attributes 00:28:19.701 ========================== 00:28:19.701 Submission Queue Entry Size 00:28:19.701 Max: 64 00:28:19.701 Min: 64 00:28:19.701 Completion Queue Entry Size 00:28:19.701 Max: 16 00:28:19.701 Min: 16 00:28:19.701 Number of Namespaces: 32 00:28:19.701 Compare Command: Supported 00:28:19.701 Write Uncorrectable Command: Not Supported 00:28:19.701 Dataset Management Command: Supported 00:28:19.701 Write Zeroes Command: Supported 00:28:19.701 Set Features Save Field: Not Supported 00:28:19.701 Reservations: Supported 00:28:19.701 Timestamp: Not Supported 00:28:19.701 Copy: Supported 00:28:19.701 Volatile Write Cache: Present 00:28:19.701 Atomic Write Unit (Normal): 1 00:28:19.701 Atomic Write Unit (PFail): 1 00:28:19.701 Atomic Compare & Write Unit: 1 00:28:19.701 Fused Compare & Write: Supported 00:28:19.701 Scatter-Gather List 00:28:19.701 SGL Command Set: Supported 00:28:19.701 SGL Keyed: Supported 00:28:19.701 SGL Bit Bucket Descriptor: Not Supported 00:28:19.701 SGL Metadata Pointer: Not Supported 00:28:19.701 Oversized SGL: Not Supported 00:28:19.701 SGL Metadata Address: Not Supported 00:28:19.701 SGL Offset: Supported 00:28:19.701 Transport SGL Data Block: Not Supported 00:28:19.702 Replay Protected Memory Block: Not Supported 00:28:19.702 00:28:19.702 Firmware Slot Information 00:28:19.702 ========================= 00:28:19.702 Active slot: 1 00:28:19.702 Slot 1 Firmware Revision: 24.09 00:28:19.702 00:28:19.702 00:28:19.702 Commands Supported and Effects 00:28:19.702 ============================== 00:28:19.702 Admin Commands 00:28:19.702 -------------- 00:28:19.702 Get Log Page (02h): Supported 00:28:19.702 Identify (06h): Supported 00:28:19.702 Abort (08h): Supported 
00:28:19.702 Set Features (09h): Supported 00:28:19.702 Get Features (0Ah): Supported 00:28:19.702 Asynchronous Event Request (0Ch): Supported 00:28:19.702 Keep Alive (18h): Supported 00:28:19.702 I/O Commands 00:28:19.702 ------------ 00:28:19.702 Flush (00h): Supported LBA-Change 00:28:19.702 Write (01h): Supported LBA-Change 00:28:19.702 Read (02h): Supported 00:28:19.702 Compare (05h): Supported 00:28:19.702 Write Zeroes (08h): Supported LBA-Change 00:28:19.702 Dataset Management (09h): Supported LBA-Change 00:28:19.702 Copy (19h): Supported LBA-Change 00:28:19.702 00:28:19.702 Error Log 00:28:19.702 ========= 00:28:19.702 00:28:19.702 Arbitration 00:28:19.702 =========== 00:28:19.702 Arbitration Burst: 1 00:28:19.702 00:28:19.702 Power Management 00:28:19.702 ================ 00:28:19.702 Number of Power States: 1 00:28:19.702 Current Power State: Power State #0 00:28:19.702 Power State #0: 00:28:19.702 Max Power: 0.00 W 00:28:19.702 Non-Operational State: Operational 00:28:19.702 Entry Latency: Not Reported 00:28:19.702 Exit Latency: Not Reported 00:28:19.702 Relative Read Throughput: 0 00:28:19.702 Relative Read Latency: 0 00:28:19.702 Relative Write Throughput: 0 00:28:19.702 Relative Write Latency: 0 00:28:19.702 Idle Power: Not Reported 00:28:19.702 Active Power: Not Reported 00:28:19.702 Non-Operational Permissive Mode: Not Supported 00:28:19.702 00:28:19.702 Health Information 00:28:19.702 ================== 00:28:19.702 Critical Warnings: 00:28:19.702 Available Spare Space: OK 00:28:19.702 Temperature: OK 00:28:19.702 Device Reliability: OK 00:28:19.702 Read Only: No 00:28:19.702 Volatile Memory Backup: OK 00:28:19.702 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:19.702 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:19.702 Available Spare: 0% 00:28:19.702 Available Spare Threshold: 0% 00:28:19.702 Life Percentage Used:[2024-07-25 05:48:13.226479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.702 [2024-07-25 
05:48:13.226491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa6cae0) 00:28:19.702 [2024-07-25 05:48:13.226502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.702 [2024-07-25 05:48:13.226525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac3cc0, cid 7, qid 0 00:28:19.702 [2024-07-25 05:48:13.226686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.702 [2024-07-25 05:48:13.226701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.702 [2024-07-25 05:48:13.226708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.226715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3cc0) on tqpair=0xa6cae0 00:28:19.702 [2024-07-25 05:48:13.226757] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:19.702 [2024-07-25 05:48:13.226776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3240) on tqpair=0xa6cae0 00:28:19.702 [2024-07-25 05:48:13.226787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.702 [2024-07-25 05:48:13.226796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac33c0) on tqpair=0xa6cae0 00:28:19.702 [2024-07-25 05:48:13.226803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.702 [2024-07-25 05:48:13.226812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac3540) on tqpair=0xa6cae0 00:28:19.702 [2024-07-25 05:48:13.226819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.702 [2024-07-25 
05:48:13.226827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac36c0) on tqpair=0xa6cae0 00:28:19.702 [2024-07-25 05:48:13.226835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.702 [2024-07-25 05:48:13.226863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.226871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.226877] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6cae0) 00:28:19.702 [2024-07-25 05:48:13.226888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.702 [2024-07-25 05:48:13.226910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac36c0, cid 3, qid 0 00:28:19.702 [2024-07-25 05:48:13.227039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.702 [2024-07-25 05:48:13.227054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.702 [2024-07-25 05:48:13.227061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac36c0) on tqpair=0xa6cae0 00:28:19.702 [2024-07-25 05:48:13.227079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6cae0) 00:28:19.702 [2024-07-25 05:48:13.227104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.702 [2024-07-25 05:48:13.227130] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac36c0, cid 3, qid 0 00:28:19.702 [2024-07-25 05:48:13.227271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.702 [2024-07-25 05:48:13.227291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.702 [2024-07-25 05:48:13.227299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac36c0) on tqpair=0xa6cae0 00:28:19.702 [2024-07-25 05:48:13.227314] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:19.702 [2024-07-25 05:48:13.227322] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:19.702 [2024-07-25 05:48:13.227339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6cae0) 00:28:19.702 [2024-07-25 05:48:13.227365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.702 [2024-07-25 05:48:13.227387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac36c0, cid 3, qid 0 00:28:19.702 [2024-07-25 05:48:13.227514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.702 [2024-07-25 05:48:13.227531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.702 [2024-07-25 05:48:13.227538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac36c0) on tqpair=0xa6cae0 00:28:19.702 [2024-07-25 05:48:13.227562] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6cae0) 00:28:19.702 [2024-07-25 05:48:13.227589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.702 [2024-07-25 05:48:13.227611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac36c0, cid 3, qid 0 00:28:19.702 [2024-07-25 05:48:13.227733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.702 [2024-07-25 05:48:13.227749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.702 [2024-07-25 05:48:13.227756] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac36c0) on tqpair=0xa6cae0 00:28:19.702 [2024-07-25 05:48:13.227780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6cae0) 00:28:19.702 [2024-07-25 05:48:13.227807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.702 [2024-07-25 05:48:13.227829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac36c0, cid 3, qid 0 00:28:19.702 [2024-07-25 05:48:13.227945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.702 [2024-07-25 05:48:13.227959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.702 [2024-07-25 05:48:13.227967] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.227974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac36c0) on tqpair=0xa6cae0 00:28:19.702 [2024-07-25 05:48:13.227990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.228000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.702 [2024-07-25 05:48:13.228006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6cae0) 00:28:19.702 [2024-07-25 05:48:13.228017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.702 [2024-07-25 05:48:13.228039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac36c0, cid 3, qid 0 00:28:19.703 [2024-07-25 05:48:13.228158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.703 [2024-07-25 05:48:13.228174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.703 [2024-07-25 05:48:13.228182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.703 [2024-07-25 05:48:13.228188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac36c0) on tqpair=0xa6cae0 00:28:19.703 [2024-07-25 05:48:13.228206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:19.703 [2024-07-25 05:48:13.228216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:19.703 [2024-07-25 05:48:13.228223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6cae0) 00:28:19.703 [2024-07-25 05:48:13.228234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.703 [2024-07-25 05:48:13.232267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac36c0, cid 3, qid 0 00:28:19.703 [2024-07-25 
05:48:13.232406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:19.703 [2024-07-25 05:48:13.232421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:19.703 [2024-07-25 05:48:13.232429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:19.703 [2024-07-25 05:48:13.232435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xac36c0) on tqpair=0xa6cae0 00:28:19.703 [2024-07-25 05:48:13.232449] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:28:19.703 0% 00:28:19.703 Data Units Read: 0 00:28:19.703 Data Units Written: 0 00:28:19.703 Host Read Commands: 0 00:28:19.703 Host Write Commands: 0 00:28:19.703 Controller Busy Time: 0 minutes 00:28:19.703 Power Cycles: 0 00:28:19.703 Power On Hours: 0 hours 00:28:19.703 Unsafe Shutdowns: 0 00:28:19.703 Unrecoverable Media Errors: 0 00:28:19.703 Lifetime Error Log Entries: 0 00:28:19.703 Warning Temperature Time: 0 minutes 00:28:19.703 Critical Temperature Time: 0 minutes 00:28:19.703 00:28:19.703 Number of Queues 00:28:19.703 ================ 00:28:19.703 Number of I/O Submission Queues: 127 00:28:19.703 Number of I/O Completion Queues: 127 00:28:19.703 00:28:19.703 Active Namespaces 00:28:19.703 ================= 00:28:19.703 Namespace ID:1 00:28:19.703 Error Recovery Timeout: Unlimited 00:28:19.703 Command Set Identifier: NVM (00h) 00:28:19.703 Deallocate: Supported 00:28:19.703 Deallocated/Unwritten Error: Not Supported 00:28:19.703 Deallocated Read Value: Unknown 00:28:19.703 Deallocate in Write Zeroes: Not Supported 00:28:19.703 Deallocated Guard Field: 0xFFFF 00:28:19.703 Flush: Supported 00:28:19.703 Reservation: Supported 00:28:19.703 Namespace Sharing Capabilities: Multiple Controllers 00:28:19.703 Size (in LBAs): 131072 (0GiB) 00:28:19.703 Capacity (in LBAs): 131072 (0GiB) 00:28:19.703 Utilization (in LBAs): 131072 (0GiB) 00:28:19.703 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:28:19.703 EUI64: ABCDEF0123456789 00:28:19.703 UUID: 4dea58e9-9078-49d3-b4e7-47ece7384a88 00:28:19.703 Thin Provisioning: Not Supported 00:28:19.703 Per-NS Atomic Units: Yes 00:28:19.703 Atomic Boundary Size (Normal): 0 00:28:19.703 Atomic Boundary Size (PFail): 0 00:28:19.703 Atomic Boundary Offset: 0 00:28:19.703 Maximum Single Source Range Length: 65535 00:28:19.703 Maximum Copy Length: 65535 00:28:19.703 Maximum Source Range Count: 1 00:28:19.703 NGUID/EUI64 Never Reused: No 00:28:19.703 Namespace Write Protected: No 00:28:19.703 Number of LBA Formats: 1 00:28:19.703 Current LBA Format: LBA Format #00 00:28:19.703 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:19.703 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:19.703 rmmod nvme_tcp 00:28:19.703 rmmod nvme_fabrics 00:28:19.703 rmmod nvme_keyring 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1719378 ']' 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1719378 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1719378 ']' 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1719378 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1719378 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1719378' 00:28:19.703 killing process with pid 1719378 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1719378 00:28:19.703 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1719378 00:28:19.960 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:19.960 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- 
# [[ tcp == \t\c\p ]] 00:28:19.960 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:19.960 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:19.960 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:19.960 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.960 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.960 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:22.485 00:28:22.485 real 0m5.325s 00:28:22.485 user 0m4.540s 00:28:22.485 sys 0m1.806s 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.485 ************************************ 00:28:22.485 END TEST nvmf_identify 00:28:22.485 ************************************ 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.485 ************************************ 00:28:22.485 START TEST nvmf_perf 00:28:22.485 ************************************ 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:28:22.485 * Looking for test storage... 00:28:22.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- 
# NET_TYPE=phy 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.485 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.486 05:48:15 
nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:22.486 05:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:24.383 05:48:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:24.383 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:24.383 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.383 05:48:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:24.383 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.383 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:24.384 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.384 05:48:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:24.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:28:24.384 00:28:24.384 --- 10.0.0.2 ping statistics --- 00:28:24.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.384 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:24.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:28:24.384 00:28:24.384 --- 10.0.0.1 ping statistics --- 00:28:24.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.384 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:24.384 05:48:17 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1721452 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1721452 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1721452 ']' 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.384 05:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:24.384 [2024-07-25 05:48:18.021197] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:28:24.384 [2024-07-25 05:48:18.021324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.384 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.642 [2024-07-25 05:48:18.091556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:24.642 [2024-07-25 05:48:18.181255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.642 [2024-07-25 05:48:18.181311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.642 [2024-07-25 05:48:18.181339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.642 [2024-07-25 05:48:18.181351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.642 [2024-07-25 05:48:18.181361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:24.642 [2024-07-25 05:48:18.181431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.642 [2024-07-25 05:48:18.181456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.642 [2024-07-25 05:48:18.181514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.642 [2024-07-25 05:48:18.181517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.642 05:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.642 05:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:28:24.642 05:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:24.642 05:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.642 05:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:24.642 05:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.642 05:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:24.642 05:48:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:27.949 05:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:27.949 05:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:28.206 05:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:28.206 05:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:28.464 05:48:21 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:28.464 05:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:28.464 05:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:28.464 05:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:28.464 05:48:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:28.722 [2024-07-25 05:48:22.181558] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.722 05:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.979 05:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:28.979 05:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:29.237 05:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:29.237 05:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:29.494 05:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.494 [2024-07-25 05:48:23.177232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.494 05:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:28:29.751 05:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:29.751 05:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:29.751 05:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:29.751 05:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:31.122 Initializing NVMe Controllers 00:28:31.122 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:31.122 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:31.122 Initialization complete. Launching workers. 00:28:31.122 ======================================================== 00:28:31.122 Latency(us) 00:28:31.122 Device Information : IOPS MiB/s Average min max 00:28:31.122 PCIE (0000:88:00.0) NSID 1 from core 0: 85302.04 333.21 374.43 11.76 4584.10 00:28:31.122 ======================================================== 00:28:31.122 Total : 85302.04 333.21 374.43 11.76 4584.10 00:28:31.122 00:28:31.122 05:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:31.122 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.494 Initializing NVMe Controllers 00:28:32.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:32.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:32.494 Initialization complete. Launching workers. 
00:28:32.494 ======================================================== 00:28:32.494 Latency(us) 00:28:32.494 Device Information : IOPS MiB/s Average min max 00:28:32.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 132.00 0.52 7739.50 174.39 45355.77 00:28:32.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.00 0.21 19044.98 7941.34 47921.11 00:28:32.494 ======================================================== 00:28:32.494 Total : 186.00 0.73 11021.74 174.39 47921.11 00:28:32.494 00:28:32.494 05:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:32.494 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.866 Initializing NVMe Controllers 00:28:33.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:33.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:33.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:33.866 Initialization complete. Launching workers. 
00:28:33.866 ======================================================== 00:28:33.866 Latency(us) 00:28:33.866 Device Information : IOPS MiB/s Average min max 00:28:33.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8470.98 33.09 3787.69 514.05 7549.32 00:28:33.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3890.99 15.20 8259.78 6760.81 16302.55 00:28:33.866 ======================================================== 00:28:33.866 Total : 12361.98 48.29 5195.30 514.05 16302.55 00:28:33.866 00:28:33.866 05:48:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:33.866 05:48:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:33.866 05:48:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:33.866 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.394 Initializing NVMe Controllers 00:28:36.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.394 Controller IO queue size 128, less than required. 00:28:36.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.394 Controller IO queue size 128, less than required. 00:28:36.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:36.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:36.394 Initialization complete. Launching workers. 
00:28:36.394 ======================================================== 00:28:36.394 Latency(us) 00:28:36.394 Device Information : IOPS MiB/s Average min max 00:28:36.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1042.15 260.54 125833.50 74285.70 195234.03 00:28:36.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 560.81 140.20 240312.32 127267.45 366225.36 00:28:36.394 ======================================================== 00:28:36.394 Total : 1602.96 400.74 165885.02 74285.70 366225.36 00:28:36.394 00:28:36.651 05:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:36.651 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.652 No valid NVMe controllers or AIO or URING devices found 00:28:36.652 Initializing NVMe Controllers 00:28:36.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.652 Controller IO queue size 128, less than required. 00:28:36.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.652 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:36.652 Controller IO queue size 128, less than required. 00:28:36.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.652 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:36.652 WARNING: Some requested NVMe devices were skipped 00:28:36.909 05:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:36.909 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.451 Initializing NVMe Controllers 00:28:39.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.451 Controller IO queue size 128, less than required. 00:28:39.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:39.451 Controller IO queue size 128, less than required. 00:28:39.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:39.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:39.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:39.451 Initialization complete. Launching workers. 
00:28:39.451 00:28:39.451 ==================== 00:28:39.451 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:39.451 TCP transport: 00:28:39.451 polls: 25819 00:28:39.451 idle_polls: 11346 00:28:39.451 sock_completions: 14473 00:28:39.451 nvme_completions: 4859 00:28:39.451 submitted_requests: 7298 00:28:39.451 queued_requests: 1 00:28:39.451 00:28:39.451 ==================== 00:28:39.451 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:39.451 TCP transport: 00:28:39.451 polls: 25946 00:28:39.451 idle_polls: 10718 00:28:39.451 sock_completions: 15228 00:28:39.451 nvme_completions: 3899 00:28:39.451 submitted_requests: 5886 00:28:39.451 queued_requests: 1 00:28:39.451 ======================================================== 00:28:39.451 Latency(us) 00:28:39.451 Device Information : IOPS MiB/s Average min max 00:28:39.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1212.55 303.14 108519.52 55054.32 162087.16 00:28:39.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 972.94 243.23 133591.93 57721.03 182312.88 00:28:39.451 ======================================================== 00:28:39.451 Total : 2185.49 546.37 119681.27 55054.32 182312.88 00:28:39.451 00:28:39.451 05:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:39.451 05:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:39.709 05:48:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:39.709 05:48:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:39.709 05:48:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:42.985 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@72 -- # ls_guid=e296b839-9f60-4a16-9be4-072ed0ff355b 00:28:42.985 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb e296b839-9f60-4a16-9be4-072ed0ff355b 00:28:42.985 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=e296b839-9f60-4a16-9be4-072ed0ff355b 00:28:42.985 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:42.985 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:42.985 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:42.985 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:43.242 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:43.242 { 00:28:43.242 "uuid": "e296b839-9f60-4a16-9be4-072ed0ff355b", 00:28:43.242 "name": "lvs_0", 00:28:43.242 "base_bdev": "Nvme0n1", 00:28:43.242 "total_data_clusters": 238234, 00:28:43.242 "free_clusters": 238234, 00:28:43.242 "block_size": 512, 00:28:43.242 "cluster_size": 4194304 00:28:43.242 } 00:28:43.242 ]' 00:28:43.242 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e296b839-9f60-4a16-9be4-072ed0ff355b") .free_clusters' 00:28:43.242 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:43.242 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e296b839-9f60-4a16-9be4-072ed0ff355b") .cluster_size' 00:28:43.242 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:43.242 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:43.242 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
00:28:43.242 952936 00:28:43.242 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:43.242 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:43.242 05:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e296b839-9f60-4a16-9be4-072ed0ff355b lbd_0 20480 00:28:43.807 05:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=7d978267-f14b-4f22-84bf-9834ee955e4f 00:28:43.807 05:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7d978267-f14b-4f22-84bf-9834ee955e4f lvs_n_0 00:28:44.768 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=69fff5b1-362b-4766-b06a-d9465fcbd8df 00:28:44.768 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 69fff5b1-362b-4766-b06a-d9465fcbd8df 00:28:44.768 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=69fff5b1-362b-4766-b06a-d9465fcbd8df 00:28:44.768 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:44.768 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:44.768 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:44.768 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:45.029 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:45.029 { 00:28:45.029 "uuid": "e296b839-9f60-4a16-9be4-072ed0ff355b", 00:28:45.029 "name": "lvs_0", 00:28:45.029 "base_bdev": "Nvme0n1", 00:28:45.029 "total_data_clusters": 238234, 00:28:45.029 "free_clusters": 233114, 00:28:45.029 "block_size": 512, 00:28:45.029 
"cluster_size": 4194304 00:28:45.029 }, 00:28:45.029 { 00:28:45.029 "uuid": "69fff5b1-362b-4766-b06a-d9465fcbd8df", 00:28:45.029 "name": "lvs_n_0", 00:28:45.029 "base_bdev": "7d978267-f14b-4f22-84bf-9834ee955e4f", 00:28:45.029 "total_data_clusters": 5114, 00:28:45.029 "free_clusters": 5114, 00:28:45.029 "block_size": 512, 00:28:45.029 "cluster_size": 4194304 00:28:45.029 } 00:28:45.029 ]' 00:28:45.029 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="69fff5b1-362b-4766-b06a-d9465fcbd8df") .free_clusters' 00:28:45.029 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:45.029 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="69fff5b1-362b-4766-b06a-d9465fcbd8df") .cluster_size' 00:28:45.029 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:45.029 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:45.029 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:45.029 20456 00:28:45.029 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:45.029 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 69fff5b1-362b-4766-b06a-d9465fcbd8df lbd_nest_0 20456 00:28:45.286 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=bf038ee3-e8e5-488a-b38c-11c9605f9bbd 00:28:45.286 05:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.543 05:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:45.543 05:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bf038ee3-e8e5-488a-b38c-11c9605f9bbd 00:28:45.802 05:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.059 05:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:46.059 05:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:46.059 05:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:46.059 05:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:46.059 05:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.059 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.247 Initializing NVMe Controllers 00:28:58.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:58.247 Initialization complete. Launching workers. 
00:28:58.247 ======================================================== 00:28:58.247 Latency(us) 00:28:58.247 Device Information : IOPS MiB/s Average min max 00:28:58.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.30 0.02 22594.49 217.81 46001.90 00:28:58.247 ======================================================== 00:28:58.247 Total : 44.30 0.02 22594.49 217.81 46001.90 00:28:58.247 00:28:58.247 05:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:58.247 05:48:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:58.247 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.205 Initializing NVMe Controllers 00:29:08.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:08.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:08.205 Initialization complete. Launching workers. 
00:29:08.205 ======================================================== 00:29:08.205 Latency(us) 00:29:08.205 Device Information : IOPS MiB/s Average min max 00:29:08.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.19 9.65 12962.52 4987.88 47903.16 00:29:08.205 ======================================================== 00:29:08.205 Total : 77.19 9.65 12962.52 4987.88 47903.16 00:29:08.205 00:29:08.205 05:49:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:08.205 05:49:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:08.205 05:49:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:08.205 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.161 Initializing NVMe Controllers 00:29:18.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:18.162 Initialization complete. Launching workers. 
00:29:18.162 ======================================================== 00:29:18.162 Latency(us) 00:29:18.162 Device Information : IOPS MiB/s Average min max 00:29:18.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6510.94 3.18 4915.64 241.89 11771.51 00:29:18.162 ======================================================== 00:29:18.162 Total : 6510.94 3.18 4915.64 241.89 11771.51 00:29:18.162 00:29:18.162 05:49:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:18.162 05:49:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.162 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.196 Initializing NVMe Controllers 00:29:28.196 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:28.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:28.196 Initialization complete. Launching workers. 
00:29:28.196 ======================================================== 00:29:28.196 Latency(us) 00:29:28.196 Device Information : IOPS MiB/s Average min max 00:29:28.196 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2353.50 294.19 13606.22 871.65 30121.25 00:29:28.196 ======================================================== 00:29:28.196 Total : 2353.50 294.19 13606.22 871.65 30121.25 00:29:28.196 00:29:28.196 05:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:28.196 05:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:28.196 05:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:28.196 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.154 Initializing NVMe Controllers 00:29:38.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.154 Controller IO queue size 128, less than required. 00:29:38.154 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:38.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:38.154 Initialization complete. Launching workers. 
00:29:38.154 ======================================================== 00:29:38.154 Latency(us) 00:29:38.154 Device Information : IOPS MiB/s Average min max 00:29:38.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11551.86 5.64 11083.87 1706.63 25406.18 00:29:38.154 ======================================================== 00:29:38.154 Total : 11551.86 5.64 11083.87 1706.63 25406.18 00:29:38.154 00:29:38.154 05:49:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:38.154 05:49:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.154 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.119 Initializing NVMe Controllers 00:29:48.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:48.119 Controller IO queue size 128, less than required. 00:29:48.119 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:48.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:48.119 Initialization complete. Launching workers. 
00:29:48.119 ======================================================== 00:29:48.119 Latency(us) 00:29:48.119 Device Information : IOPS MiB/s Average min max 00:29:48.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1214.00 151.75 106223.27 15515.61 214856.92 00:29:48.119 ======================================================== 00:29:48.119 Total : 1214.00 151.75 106223.27 15515.61 214856.92 00:29:48.119 00:29:48.119 05:49:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.377 05:49:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bf038ee3-e8e5-488a-b38c-11c9605f9bbd 00:29:49.310 05:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:49.310 05:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d978267-f14b-4f22-84bf-9834ee955e4f 00:29:49.874 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:49.874 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:49.874 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:49.874 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:49.874 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:49.874 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:49.874 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:49.874 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:29:49.874 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:49.874 rmmod nvme_tcp 00:29:49.874 rmmod nvme_fabrics 00:29:50.130 rmmod nvme_keyring 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1721452 ']' 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1721452 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1721452 ']' 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1721452 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1721452 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1721452' 00:29:50.130 killing process with pid 1721452 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1721452 00:29:50.130 05:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1721452 00:29:52.028 05:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:52.028 05:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:29:52.028 05:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:52.028 05:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:52.028 05:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:52.028 05:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.028 05:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.028 05:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:53.928 00:29:53.928 real 1m31.618s 00:29:53.928 user 5m30.502s 00:29:53.928 sys 0m16.899s 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:53.928 ************************************ 00:29:53.928 END TEST nvmf_perf 00:29:53.928 ************************************ 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.928 ************************************ 00:29:53.928 START TEST nvmf_fio_host 00:29:53.928 ************************************ 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:53.928 * 
Looking for test storage... 00:29:53.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.928 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.929 05:49:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:53.929 05:49:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.862 05:49:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:55.862 05:49:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:55.862 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:55.862 05:49:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:55.862 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:55.862 05:49:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:55.862 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:55.862 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:55.862 05:49:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:55.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:55.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:29:55.862 00:29:55.862 --- 10.0.0.2 ping statistics --- 00:29:55.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.862 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:55.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:55.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:29:55.862 00:29:55.862 --- 10.0.0.1 ping statistics --- 00:29:55.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.862 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:55.862 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:55.863 05:49:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1733532 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1733532 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1733532 ']' 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
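For readability, the nvmf_tcp_init plumbing traced above (nvmf/common.sh@229-268) can be condensed into the sketch below. The interface names, addresses, and namespace name are taken from this log; the commands are only collected and printed, not executed, since reproducing them requires root and the cvl_0_0/cvl_0_1 ports.

```shell
# Sketch of the nvmf_tcp_init topology from the trace above: the target port
# (cvl_0_0, 10.0.0.2) is moved into a private network namespace while the
# initiator port (cvl_0_1, 10.0.0.1) stays in the default namespace, so the
# two physical ports reach each other over real TCP.
ns=cvl_0_0_ns_spdk
target_if=cvl_0_0
init_if=cvl_0_1
setup_cmds=$(cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $init_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $init_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $init_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
)
# Printed for inspection; run as root to reproduce the topology.
printf '%s\n' "$setup_cmds"
```

The final two pings mirror the reachability checks in the log (nvmf/common.sh@267-268) before the test proceeds.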
00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:55.863 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.121 [2024-07-25 05:49:49.568321] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:29:56.121 [2024-07-25 05:49:49.568402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.121 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.121 [2024-07-25 05:49:49.638049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.121 [2024-07-25 05:49:49.729207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.121 [2024-07-25 05:49:49.729288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.121 [2024-07-25 05:49:49.729306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.121 [2024-07-25 05:49:49.729320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.121 [2024-07-25 05:49:49.729332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:56.121 [2024-07-25 05:49:49.729396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.121 [2024-07-25 05:49:49.729466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.121 [2024-07-25 05:49:49.729557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.121 [2024-07-25 05:49:49.729560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.379 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:56.379 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:29:56.379 05:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:56.637 [2024-07-25 05:49:50.120695] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.637 05:49:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:56.637 05:49:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.637 05:49:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.637 05:49:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:56.895 Malloc1 00:29:56.895 05:49:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:57.151 05:49:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:57.408 05:49:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.666 [2024-07-25 05:49:51.150586] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.666 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:57.924 05:49:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:58.182 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:58.182 fio-3.35 
00:29:58.182 Starting 1 thread 00:29:58.182 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.707 00:30:00.707 test: (groupid=0, jobs=1): err= 0: pid=1733889: Thu Jul 25 05:49:53 2024 00:30:00.707 read: IOPS=9179, BW=35.9MiB/s (37.6MB/s)(71.9MiB/2006msec) 00:30:00.707 slat (usec): min=2, max=135, avg= 2.70, stdev= 1.73 00:30:00.707 clat (usec): min=2597, max=12980, avg=7697.73, stdev=578.34 00:30:00.707 lat (usec): min=2623, max=12982, avg=7700.43, stdev=578.24 00:30:00.707 clat percentiles (usec): 00:30:00.708 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:30:00.708 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:30:00.708 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:30:00.708 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11076], 99.95th=[12387], 00:30:00.708 | 99.99th=[12911] 00:30:00.708 bw ( KiB/s): min=35808, max=37208, per=99.93%, avg=36692.00, stdev=610.19, samples=4 00:30:00.708 iops : min= 8952, max= 9302, avg=9173.00, stdev=152.55, samples=4 00:30:00.708 write: IOPS=9187, BW=35.9MiB/s (37.6MB/s)(72.0MiB/2006msec); 0 zone resets 00:30:00.708 slat (usec): min=2, max=106, avg= 2.80, stdev= 1.40 00:30:00.708 clat (usec): min=1301, max=12291, avg=6194.09, stdev=507.84 00:30:00.708 lat (usec): min=1308, max=12293, avg=6196.89, stdev=507.79 00:30:00.708 clat percentiles (usec): 00:30:00.708 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:30:00.708 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:30:00.708 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6915], 00:30:00.708 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[10290], 99.95th=[11207], 00:30:00.708 | 99.99th=[12256] 00:30:00.708 bw ( KiB/s): min=36544, max=36928, per=99.98%, avg=36744.00, stdev=165.51, samples=4 00:30:00.708 iops : min= 9136, max= 9232, avg=9186.00, stdev=41.38, samples=4 00:30:00.708 lat (msec) : 2=0.03%, 4=0.09%, 10=99.75%, 20=0.13% 
00:30:00.708 cpu : usr=58.90%, sys=35.96%, ctx=39, majf=0, minf=38 00:30:00.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:00.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:00.708 issued rwts: total=18414,18431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:00.708 00:30:00.708 Run status group 0 (all jobs): 00:30:00.708 READ: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2006-2006msec 00:30:00.708 WRITE: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=72.0MiB (75.5MB), run=2006-2006msec 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
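Stripped of the xtrace noise, the target-side provisioning and the first fio run above reduce to the short sequence below. Paths are shortened relative to the absolute workspace paths in the log; the commands are collected into a variable and printed rather than executed, since they need a running nvmf_tgt (inside cvl_0_0_ns_spdk) and the SPDK fio plugin.

```shell
# Provisioning RPCs and the fio invocation from host/fio.sh, in log order.
# Paths are relative to an SPDK checkout (shortened from the log); nothing
# here is executed.
cmds=$(cat <<'EOF'
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
  --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
EOF
)
printf '%s\n' "$cmds"
```

The `--filename` string is how the SPDK fio plugin, selected via LD_PRELOAD as in the log's fio_plugin wrapper, encodes the transport, address, and namespace of the target to connect to.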
00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:00.708 05:49:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:30:00.708 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:00.708 fio-3.35 00:30:00.708 Starting 1 thread 00:30:00.708 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.234 00:30:03.234 test: (groupid=0, jobs=1): err= 0: pid=1734232: Thu Jul 25 05:49:56 2024 00:30:03.234 read: IOPS=7843, BW=123MiB/s (129MB/s)(246MiB/2010msec) 00:30:03.234 slat (nsec): min=2818, max=95529, avg=3649.48, stdev=1549.30 00:30:03.234 clat (usec): min=2234, max=18672, avg=9482.29, stdev=2250.75 00:30:03.234 lat (usec): min=2237, max=18676, avg=9485.94, stdev=2250.76 00:30:03.234 clat percentiles (usec): 00:30:03.234 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6652], 20.00th=[ 7570], 00:30:03.234 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[ 9896], 00:30:03.234 | 70.00th=[10290], 80.00th=[11076], 90.00th=[12518], 95.00th=[13566], 00:30:03.234 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16712], 99.95th=[16909], 00:30:03.234 | 99.99th=[18482] 00:30:03.234 bw ( KiB/s): min=54432, max=71840, per=51.81%, avg=65016.00, stdev=7625.47, samples=4 00:30:03.234 iops : min= 3402, max= 4490, avg=4063.50, stdev=476.59, samples=4 00:30:03.234 write: IOPS=4643, BW=72.5MiB/s (76.1MB/s)(133MiB/1836msec); 0 zone resets 00:30:03.234 slat (usec): min=30, max=161, avg=33.06, stdev= 4.31 00:30:03.234 clat (usec): min=4200, max=19872, avg=11820.50, stdev=2271.36 00:30:03.234 lat (usec): min=4234, max=19902, avg=11853.56, stdev=2271.48 00:30:03.234 clat percentiles (usec): 00:30:03.234 | 1.00th=[ 7701], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:30:03.234 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11600], 60.00th=[12125], 00:30:03.234 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15008], 95.00th=[15926], 00:30:03.234 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19268], 99.95th=[19530], 00:30:03.234 | 99.99th=[19792] 00:30:03.234 bw ( KiB/s): min=56512, max=74848, per=91.43%, 
avg=67928.00, stdev=8174.37, samples=4 00:30:03.234 iops : min= 3532, max= 4678, avg=4245.50, stdev=510.90, samples=4 00:30:03.234 lat (msec) : 4=0.19%, 10=49.02%, 20=50.78% 00:30:03.234 cpu : usr=72.67%, sys=24.14%, ctx=48, majf=0, minf=62 00:30:03.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:03.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:03.234 issued rwts: total=15766,8525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:03.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:03.234 00:30:03.234 Run status group 0 (all jobs): 00:30:03.234 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=246MiB (258MB), run=2010-2010msec 00:30:03.234 WRITE: bw=72.5MiB/s (76.1MB/s), 72.5MiB/s-72.5MiB/s (76.1MB/s-76.1MB/s), io=133MiB (140MB), run=1836-1836msec 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:03.234 05:49:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:03.234 05:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:06.511 Nvme0n1 00:30:06.511 05:49:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:09.784 05:50:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=610e809b-4a85-4009-b184-7926a29c4da0 00:30:09.784 05:50:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 610e809b-4a85-4009-b184-7926a29c4da0 00:30:09.784 05:50:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=610e809b-4a85-4009-b184-7926a29c4da0 00:30:09.784 05:50:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:09.784 05:50:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:09.784 05:50:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:09.784 05:50:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:09.784 05:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:09.784 { 00:30:09.784 "uuid": "610e809b-4a85-4009-b184-7926a29c4da0", 00:30:09.784 "name": "lvs_0", 00:30:09.784 "base_bdev": "Nvme0n1", 00:30:09.784 "total_data_clusters": 930, 00:30:09.784 "free_clusters": 930, 00:30:09.784 
"block_size": 512, 00:30:09.784 "cluster_size": 1073741824 00:30:09.784 } 00:30:09.784 ]' 00:30:09.784 05:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="610e809b-4a85-4009-b184-7926a29c4da0") .free_clusters' 00:30:09.784 05:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:09.784 05:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="610e809b-4a85-4009-b184-7926a29c4da0") .cluster_size' 00:30:09.784 05:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:09.784 05:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:09.784 05:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:09.784 952320 00:30:09.784 05:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:09.784 dc3a4e19-98ed-4adf-8ad1-f48631d1fa83 00:30:10.041 05:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:10.297 05:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
--bs=4096 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:10.553 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:10.810 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:10.810 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:10.810 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.810 05:50:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.810 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:10.810 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:10.810 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:10.810 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:10.810 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:10.810 05:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.810 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:10.810 fio-3.35 00:30:10.810 Starting 1 thread 00:30:10.810 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.335 00:30:13.335 test: (groupid=0, jobs=1): err= 0: pid=1735508: Thu Jul 25 05:50:06 2024 00:30:13.335 read: IOPS=5789, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2007msec) 00:30:13.335 slat (nsec): min=1938, max=193469, avg=2552.97, stdev=2676.97 00:30:13.335 clat (usec): min=904, max=171731, avg=12162.63, stdev=11825.09 00:30:13.335 lat (usec): min=909, max=171774, avg=12165.18, stdev=11825.55 00:30:13.335 clat percentiles (msec): 00:30:13.335 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:30:13.335 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:30:13.335 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:30:13.335 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 
171], 00:30:13.335 | 99.99th=[ 171] 00:30:13.335 bw ( KiB/s): min=15840, max=25648, per=99.66%, avg=23080.00, stdev=4830.89, samples=4 00:30:13.335 iops : min= 3960, max= 6412, avg=5770.00, stdev=1207.72, samples=4 00:30:13.335 write: IOPS=5769, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2007msec); 0 zone resets 00:30:13.335 slat (usec): min=2, max=162, avg= 2.66, stdev= 1.93 00:30:13.335 clat (usec): min=372, max=169308, avg=9807.15, stdev=11107.95 00:30:13.335 lat (usec): min=376, max=169316, avg=9809.82, stdev=11108.42 00:30:13.335 clat percentiles (msec): 00:30:13.335 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:30:13.335 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:30:13.335 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:30:13.335 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:30:13.335 | 99.99th=[ 169] 00:30:13.335 bw ( KiB/s): min=16808, max=25472, per=99.94%, avg=23066.00, stdev=4180.17, samples=4 00:30:13.335 iops : min= 4202, max= 6368, avg=5766.50, stdev=1045.04, samples=4 00:30:13.335 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:13.335 lat (msec) : 2=0.03%, 4=0.11%, 10=48.93%, 20=50.35%, 250=0.55% 00:30:13.335 cpu : usr=57.48%, sys=38.93%, ctx=101, majf=0, minf=38 00:30:13.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:13.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.335 issued rwts: total=11620,11580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.335 00:30:13.335 Run status group 0 (all jobs): 00:30:13.335 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2007-2007msec 00:30:13.335 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2007-2007msec 00:30:13.335 05:50:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:13.335 05:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:14.738 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=42dc27d8-a50d-41dd-bff4-7f42ef7b3504 00:30:14.738 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 42dc27d8-a50d-41dd-bff4-7f42ef7b3504 00:30:14.738 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=42dc27d8-a50d-41dd-bff4-7f42ef7b3504 00:30:14.738 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:14.738 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:14.738 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:14.738 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:15.000 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:15.000 { 00:30:15.000 "uuid": "610e809b-4a85-4009-b184-7926a29c4da0", 00:30:15.000 "name": "lvs_0", 00:30:15.000 "base_bdev": "Nvme0n1", 00:30:15.000 "total_data_clusters": 930, 00:30:15.000 "free_clusters": 0, 00:30:15.000 "block_size": 512, 00:30:15.000 "cluster_size": 1073741824 00:30:15.000 }, 00:30:15.000 { 00:30:15.000 "uuid": "42dc27d8-a50d-41dd-bff4-7f42ef7b3504", 00:30:15.000 "name": "lvs_n_0", 00:30:15.000 "base_bdev": "dc3a4e19-98ed-4adf-8ad1-f48631d1fa83", 00:30:15.000 "total_data_clusters": 237847, 00:30:15.000 "free_clusters": 237847, 00:30:15.000 "block_size": 512, 00:30:15.000 
"cluster_size": 4194304 00:30:15.000 } 00:30:15.000 ]' 00:30:15.000 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="42dc27d8-a50d-41dd-bff4-7f42ef7b3504") .free_clusters' 00:30:15.000 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:15.000 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="42dc27d8-a50d-41dd-bff4-7f42ef7b3504") .cluster_size' 00:30:15.000 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:15.000 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:15.000 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:15.000 951388 00:30:15.000 05:50:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:15.566 7f2feda1-5261-4dc9-bf98-397de1a24b89 00:30:15.566 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:15.860 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:16.118 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:16.376 
05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:16.376 05:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:16.633 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:16.634 fio-3.35 00:30:16.634 Starting 1 thread 00:30:16.634 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.158 00:30:19.158 test: (groupid=0, jobs=1): err= 0: pid=1736247: Thu Jul 25 05:50:12 2024 00:30:19.158 read: IOPS=5057, BW=19.8MiB/s (20.7MB/s)(39.7MiB/2009msec) 00:30:19.158 slat (nsec): min=1838, max=161841, avg=2597.91, stdev=2551.93 00:30:19.158 clat (usec): min=5333, max=22338, avg=13896.47, stdev=1258.11 00:30:19.158 lat (usec): min=5342, max=22341, avg=13899.06, stdev=1257.98 00:30:19.158 clat percentiles (usec): 00:30:19.158 | 1.00th=[10945], 5.00th=[11863], 10.00th=[12387], 20.00th=[12911], 00:30:19.158 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13960], 60.00th=[14222], 00:30:19.158 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:30:19.158 | 99.00th=[16712], 99.50th=[17171], 99.90th=[19792], 99.95th=[21103], 
00:30:19.158 | 99.99th=[22152] 00:30:19.158 bw ( KiB/s): min=18872, max=20784, per=99.79%, avg=20186.00, stdev=893.86, samples=4 00:30:19.158 iops : min= 4718, max= 5196, avg=5046.50, stdev=223.47, samples=4 00:30:19.158 write: IOPS=5053, BW=19.7MiB/s (20.7MB/s)(39.7MiB/2009msec); 0 zone resets 00:30:19.158 slat (nsec): min=1955, max=152282, avg=2710.54, stdev=2142.45 00:30:19.158 clat (usec): min=2709, max=21082, avg=11191.10, stdev=1041.51 00:30:19.158 lat (usec): min=2717, max=21084, avg=11193.81, stdev=1041.42 00:30:19.158 clat percentiles (usec): 00:30:19.158 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:30:19.158 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:30:19.158 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:30:19.158 | 99.00th=[13435], 99.50th=[13829], 99.90th=[19268], 99.95th=[19530], 00:30:19.158 | 99.99th=[19792] 00:30:19.158 bw ( KiB/s): min=19928, max=20400, per=99.86%, avg=20184.00, stdev=196.29, samples=4 00:30:19.158 iops : min= 4982, max= 5100, avg=5046.00, stdev=49.07, samples=4 00:30:19.158 lat (msec) : 4=0.02%, 10=5.20%, 20=94.74%, 50=0.04% 00:30:19.158 cpu : usr=51.74%, sys=44.27%, ctx=87, majf=0, minf=38 00:30:19.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:30:19.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:19.158 issued rwts: total=10160,10152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:19.158 00:30:19.158 Run status group 0 (all jobs): 00:30:19.159 READ: bw=19.8MiB/s (20.7MB/s), 19.8MiB/s-19.8MiB/s (20.7MB/s-20.7MB/s), io=39.7MiB (41.6MB), run=2009-2009msec 00:30:19.159 WRITE: bw=19.7MiB/s (20.7MB/s), 19.7MiB/s-19.7MiB/s (20.7MB/s-20.7MB/s), io=39.7MiB (41.6MB), run=2009-2009msec 00:30:19.159 05:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:19.159 05:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:19.159 05:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:23.337 05:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:23.337 05:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:26.617 05:50:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:26.617 05:50:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:28.530 
rmmod nvme_tcp 00:30:28.530 rmmod nvme_fabrics 00:30:28.530 rmmod nvme_keyring 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1733532 ']' 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1733532 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1733532 ']' 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1733532 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1733532 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1733532' 00:30:28.530 killing process with pid 1733532 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1733532 00:30:28.530 05:50:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1733532 00:30:28.530 05:50:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:28.530 05:50:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:28.530 05:50:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:28.530 05:50:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:28.530 05:50:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:28.530 05:50:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.530 05:50:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.530 05:50:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:31.058 00:30:31.058 real 0m36.857s 00:30:31.058 user 2m21.096s 00:30:31.058 sys 0m7.067s 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.058 ************************************ 00:30:31.058 END TEST nvmf_fio_host 00:30:31.058 ************************************ 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.058 ************************************ 00:30:31.058 START TEST nvmf_failover 00:30:31.058 ************************************ 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 
00:30:31.058 * Looking for test storage... 00:30:31.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.058 05:50:24 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.058 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:31.059 05:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.996 
05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:32.996 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:32.996 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:32.996 05:50:26 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:32.996 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:32.996 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
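The discovery loop above resolves each supported PCI function (here the two ice-bound 0x8086:0x159b ports, 0000:0a:00.0 and 0000:0a:00.1) to its kernel net interface by globbing /sys/bus/pci/devices/$pci/net/*. A minimal sketch of that lookup; the function name and the optional sysfs-root override are illustrative, not part of the test scripts:

```shell
# Resolve a PCI address to the net interface(s) the kernel registered for it,
# mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob above.
# Usage: net_devs_for_pci <pci_addr> [sysfs_root]
net_devs_for_pci() {
    root=${2:-/sys/bus/pci/devices}
    for p in "$root/$1/net/"*; do
        # With no match the glob stays literal, so guard on existence.
        if [ -e "$p" ]; then
            basename "$p"
        fi
    done
}
```

On the machine in the log this resolves 0000:0a:00.0 to cvl_0_0 and 0000:0a:00.1 to cvl_0_1.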
00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:32.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:32.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:30:32.996 00:30:32.996 --- 10.0.0.2 ping statistics --- 00:30:32.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.996 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:32.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:30:32.996 00:30:32.996 --- 10.0.0.1 ping statistics --- 00:30:32.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.996 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:30:32.996 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
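The nvmf_tcp_init steps above move the target-side port into a private network namespace, address both ends of the link, open TCP/4420 through iptables, and verify reachability with a ping in each direction. A dry-run sketch of the same sequence; interface and namespace names are the ones the log shows, and `run` only echoes and records each command, since the real versions require root:

```shell
# Dry-run of the nvmf_tcp_init sequence from nvmf/common.sh.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port, moved into the namespace (10.0.0.2)
INI_IF=cvl_0_1   # initiator-side port, stays in the default namespace (10.0.0.1)
CMDS=""
run() { CMDS="${CMDS}$* ; "; printf '+ %s\n' "$*"; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Putting the target port in its own namespace lets a single machine act as both NVMe/TCP initiator and target over real NIC hardware, which is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`.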
00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1739587 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1739587 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1739587 ']' 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:32.997 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:32.997 [2024-07-25 05:50:26.599705] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:30:32.997 [2024-07-25 05:50:26.599801] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.997 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.997 [2024-07-25 05:50:26.664969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:33.255 [2024-07-25 05:50:26.754419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.255 [2024-07-25 05:50:26.754478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.255 [2024-07-25 05:50:26.754508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.255 [2024-07-25 05:50:26.754521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.255 [2024-07-25 05:50:26.754531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:33.255 [2024-07-25 05:50:26.754670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.255 [2024-07-25 05:50:26.757264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:33.255 [2024-07-25 05:50:26.757277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.255 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:33.255 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:30:33.255 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:33.255 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:33.255 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:33.255 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.255 05:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:33.513 [2024-07-25 05:50:27.175950] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.513 05:50:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:33.771 Malloc0 00:30:34.029 05:50:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:34.287 05:50:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:34.287 05:50:27 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.545 [2024-07-25 05:50:28.196697] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.545 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:34.803 [2024-07-25 05:50:28.449446] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:34.803 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:35.061 [2024-07-25 05:50:28.690215] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:35.061 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1739785 00:30:35.061 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:35.061 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:35.061 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1739785 /var/tmp/bdevperf.sock 00:30:35.061 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1739785 ']' 00:30:35.061 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:35.061 05:50:28 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:35.061 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:35.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:35.061 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:35.061 05:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:35.319 05:50:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:35.319 05:50:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:30:35.319 05:50:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:35.884 NVMe0n1 00:30:35.884 05:50:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:36.142 00:30:36.142 05:50:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1739912 00:30:36.142 05:50:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:36.142 05:50:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:37.076 05:50:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4420 00:30:37.334 [2024-07-25 05:50:30.905780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3d290 is same with the state(5) to be set 00:30:37.334 [2024-07-25 05:50:30.906365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3d290 is same with the state(5) to be set 00:30:37.334 05:50:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:40.615 05:50:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:40.872 00:30:40.872 05:50:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:41.130 [2024-07-25 05:50:34.671773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3e0d0 is same with the state(5) to be set 00:30:41.130 [2024-07-25 05:50:34.672191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3e0d0 is same with the state(5) to be set 00:30:41.130 05:50:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- #
sleep 3 00:30:44.465 05:50:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.465 [2024-07-25 05:50:37.931016] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.465 05:50:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:45.397 05:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:45.656 [2024-07-25 05:50:39.205961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3f460 is same with the state(5) to be set [identical message repeated through 05:50:39.207069; duplicate lines omitted] 00:30:45.657 05:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1739912 00:30:52.252 0 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1739785 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1739785 ']' 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1739785 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1739785 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 1739785' 00:30:52.252 killing process with pid 1739785 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1739785 00:30:52.252 05:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1739785 00:30:52.252 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:52.252 [2024-07-25 05:50:28.753661] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:30:52.252 [2024-07-25 05:50:28.753749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739785 ] 00:30:52.252 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.252 [2024-07-25 05:50:28.817763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.252 [2024-07-25 05:50:28.903085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.252 Running I/O for 15 seconds... 
00:30:52.252 [2024-07-25 05:50:30.907051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 
[2024-07-25 05:50:30.907660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.252 [2024-07-25 05:50:30.907675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.252 [2024-07-25 05:50:30.907689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.907718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.907747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.907775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.907804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.907833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.907865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.907894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.907922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.907950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.907979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.907992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.908008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.908022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.908037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.908050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.908064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.253 [2024-07-25 05:50:30.908078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.908093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.253 [2024-07-25 05:50:30.908107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 [2024-07-25 05:50:30.908122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.253 [2024-07-25 05:50:30.908136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253 
[2024-07-25 05:50:30.908150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.253
[2024-07-25 05:50:30.908164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.253
[... repeated nvme_io_qpair_print_command WRITE/READ notices, each followed by an "ABORTED - SQ DELETION (00/08)" completion, for the remaining queued I/O on sqid:1 (lba:77224 through lba:77736) ...]
[2024-07-25 05:50:30.910099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.255
[2024-07-25 05:50:30.910116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77744 len:8 PRP1 0x0 PRP2 0x0 00:30:52.255
[2024-07-25 05:50:30.910129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.255
[2024-07-25 05:50:30.910148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.255
[... repeated nvme_qpair_manual_complete_request / WRITE / "ABORTED - SQ DELETION" / nvme_qpair_abort_queued_reqs sequences for the remaining queued writes (lba:77752 through lba:77984) ...]
[2024-07-25 05:50:30.911677] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x134c400 was disconnected and freed. reset controller. 00:30:52.256
[2024-07-25 05:50:30.911696] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:52.256
[2024-07-25 05:50:30.911732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.256
[2024-07-25 05:50:30.911750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256
[... repeated ASYNC EVENT REQUEST / "ABORTED - SQ DELETION" pairs for admin qpair cid:1 through cid:3 ...]
[2024-07-25 05:50:30.911867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*:
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:52.256 [2024-07-25 05:50:30.915117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:52.256 [2024-07-25 05:50:30.915156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1355830 (9): Bad file descriptor 00:30:52.256 [2024-07-25 05:50:31.106170] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:52.256 [2024-07-25 05:50:34.673730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.673793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.673827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.673843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.673860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.673874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.673890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.673903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.673918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:50 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.673932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.673947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.673961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.673976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.673990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.674005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.674019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.674034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.674053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.674069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.674083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.674097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.256 [2024-07-25 05:50:34.674111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.256 [2024-07-25 05:50:34.674126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674278] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:46 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:52.257 [2024-07-25 05:50:34.674631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.257 [2024-07-25 05:50:34.674730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.674758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.674786] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.674818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.674847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.674875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.674903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.674931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:112 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.674960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.674974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.674987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.675002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.675017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.675031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.675045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.675060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.675073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.675088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.675101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:52.257 [2024-07-25 05:50:34.675116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.675130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.675144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.675158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.675176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.675190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.675205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.675219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.675233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.675268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.257 [2024-07-25 05:50:34.675286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.257 [2024-07-25 05:50:34.675300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 
[2024-07-25 05:50:34.675641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 
lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.675981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.675996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.676010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.676026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.676040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.676056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.676070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.676085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.676099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.676114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.676128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 
[2024-07-25 05:50:34.676143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.258 [2024-07-25 05:50:34.676157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.676192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.258 [2024-07-25 05:50:34.676209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123008 len:8 PRP1 0x0 PRP2 0x0 00:30:52.258 [2024-07-25 05:50:34.676222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.676248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.258 [2024-07-25 05:50:34.676262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.258 [2024-07-25 05:50:34.676274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123016 len:8 PRP1 0x0 PRP2 0x0 00:30:52.258 [2024-07-25 05:50:34.676287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.676300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.258 [2024-07-25 05:50:34.676311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.258 [2024-07-25 05:50:34.676322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123024 len:8 PRP1 0x0 PRP2 0x0 00:30:52.258 [2024-07-25 05:50:34.676335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 
05:50:34.676348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.258 [2024-07-25 05:50:34.676363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.258 [2024-07-25 05:50:34.676375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123032 len:8 PRP1 0x0 PRP2 0x0 00:30:52.258 [2024-07-25 05:50:34.676388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.676401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.258 [2024-07-25 05:50:34.676412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.258 [2024-07-25 05:50:34.676423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123040 len:8 PRP1 0x0 PRP2 0x0 00:30:52.258 [2024-07-25 05:50:34.676436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.676450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.258 [2024-07-25 05:50:34.676461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.258 [2024-07-25 05:50:34.676472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123048 len:8 PRP1 0x0 PRP2 0x0 00:30:52.258 [2024-07-25 05:50:34.676484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.258 [2024-07-25 05:50:34.676498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 
[2024-07-25 05:50:34.676520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123056 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.676533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.676546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.676568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123064 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.676581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.676594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.676617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123072 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.676629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.676643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.676665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123080 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.676678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.676691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.676714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123088 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.676727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.676743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.676767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123096 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.676779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.676792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.676814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123104 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.676827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.676840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676852] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.676863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123112 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.676876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.676889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.676911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123120 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.676923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.676936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.676959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123128 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.676971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.676984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.676996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123136 len:8 PRP1 0x0 PRP2 0x0 
00:30:52.259 [2024-07-25 05:50:34.677020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123144 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123152 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123160 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677182] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123168 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123176 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123184 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677357] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123192 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123200 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123208 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123216 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123224 len:8 PRP1 0x0 PRP2 0x0 00:30:52.259 [2024-07-25 05:50:34.677568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.259 [2024-07-25 05:50:34.677581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.259 [2024-07-25 05:50:34.677592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.259 [2024-07-25 05:50:34.677603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123232 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.677616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.677629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.677640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.677651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123240 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.677664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.677678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.677689] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.677700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123248 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.677713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.677726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.677737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.677748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123256 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.677761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.677775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.677786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.677797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123264 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.677810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.677823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.677834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.677845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123272 len:8 PRP1 0x0 PRP2 0x0 
00:30:52.260 [2024-07-25 05:50:34.677858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.677875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.677886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.677897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123280 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.677910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.677923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.677935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.677946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123288 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.677959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.677972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.677983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.677995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123296 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678027] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123304 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123312 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123320 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678195] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123328 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123336 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123344 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123352 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123360 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123368 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123376 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678535] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122608 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.260 [2024-07-25 05:50:34.678583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.260 [2024-07-25 05:50:34.678594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122616 len:8 PRP1 0x0 PRP2 0x0 00:30:52.260 [2024-07-25 05:50:34.678606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.260 [2024-07-25 05:50:34.678673] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x134e1d0 was disconnected and freed. reset controller. 
00:30:52.260 [2024-07-25 05:50:34.678696] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:52.260 [2024-07-25 05:50:34.678732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.261 [2024-07-25 05:50:34.678751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.261 [2024-07-25 05:50:34.678766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.261 [2024-07-25 05:50:34.678779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.261 [2024-07-25 05:50:34.678792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.261 [2024-07-25 05:50:34.678805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.261 [2024-07-25 05:50:34.678818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.261 [2024-07-25 05:50:34.678831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.261 [2024-07-25 05:50:34.678844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:52.261 [2024-07-25 05:50:34.678885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1355830 (9): Bad file descriptor 00:30:52.261 [2024-07-25 05:50:34.682114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:52.261 [2024-07-25 05:50:34.758989] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:52.261 [2024-07-25 05:50:39.204302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.261 [2024-07-25 05:50:39.204364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.261 [2024-07-25 05:50:39.204383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.261 [2024-07-25 05:50:39.204397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.261 [2024-07-25 05:50:39.204411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.261 [2024-07-25 05:50:39.204424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.261 [2024-07-25 05:50:39.204438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.261 [2024-07-25 05:50:39.204452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.261 [2024-07-25 05:50:39.204465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1355830 is same with the state(5) to be set 
00:30:52.261-00:30:52.264 [2024-07-25 05:50:39.207354 - 05:50:39.210623] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: every outstanding I/O command on qid:1 was aborted with SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0 during the controller reset: READ commands (sqid:1, nsid:1, lba:63448-63936, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1, nsid:1, lba:63944-64304, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000); the identical per-command NOTICE/completion pair repeats once for each aborted command.
p:0 m:0 dnr:0 00:30:52.264 [2024-07-25 05:50:39.210638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.264 [2024-07-25 05:50:39.210652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.264 [2024-07-25 05:50:39.210667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.264 [2024-07-25 05:50:39.210681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.264 [2024-07-25 05:50:39.210696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.264 [2024-07-25 05:50:39.210710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.264 [2024-07-25 05:50:39.210725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.264 [2024-07-25 05:50:39.210739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.264 [2024-07-25 05:50:39.210753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.264 [2024-07-25 05:50:39.210768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.210783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.265 [2024-07-25 05:50:39.210797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.210812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.265 [2024-07-25 05:50:39.210826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.210841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.265 [2024-07-25 05:50:39.210855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.210874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.265 [2024-07-25 05:50:39.210888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.210903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.265 [2024-07-25 05:50:39.210917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.210932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.265 [2024-07-25 05:50:39.210947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.210962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.265 [2024-07-25 05:50:39.210975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.210990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.265 [2024-07-25 05:50:39.211004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.211019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.265 [2024-07-25 05:50:39.211033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.211061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.265 [2024-07-25 05:50:39.211078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64424 len:8 PRP1 0x0 PRP2 0x0 00:30:52.265 [2024-07-25 05:50:39.211091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.211110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.265 [2024-07-25 05:50:39.211122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.265 [2024-07-25 05:50:39.211134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64432 len:8 PRP1 0x0 PRP2 0x0 00:30:52.265 [2024-07-25 05:50:39.211147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 
05:50:39.211160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.265 [2024-07-25 05:50:39.211171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.265 [2024-07-25 05:50:39.211182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64440 len:8 PRP1 0x0 PRP2 0x0 00:30:52.265 [2024-07-25 05:50:39.211195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.211208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.265 [2024-07-25 05:50:39.211219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.265 [2024-07-25 05:50:39.211230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64448 len:8 PRP1 0x0 PRP2 0x0 00:30:52.265 [2024-07-25 05:50:39.211251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.211266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.265 [2024-07-25 05:50:39.211281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.265 [2024-07-25 05:50:39.211293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64456 len:8 PRP1 0x0 PRP2 0x0 00:30:52.265 [2024-07-25 05:50:39.211305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.211318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.265 [2024-07-25 05:50:39.211329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.265 
[2024-07-25 05:50:39.211341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64464 len:8 PRP1 0x0 PRP2 0x0 00:30:52.265 [2024-07-25 05:50:39.211354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.265 [2024-07-25 05:50:39.211411] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13790b0 was disconnected and freed. reset controller. 00:30:52.265 [2024-07-25 05:50:39.211430] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:52.265 [2024-07-25 05:50:39.211446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:52.265 [2024-07-25 05:50:39.214685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:52.265 [2024-07-25 05:50:39.214725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1355830 (9): Bad file descriptor 00:30:52.265 [2024-07-25 05:50:39.245295] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:52.265 00:30:52.265 Latency(us) 00:30:52.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.265 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:52.265 Verification LBA range: start 0x0 length 0x4000 00:30:52.265 NVMe0n1 : 15.01 8533.35 33.33 779.56 0.00 13717.39 813.13 16990.81 00:30:52.265 =================================================================================================================== 00:30:52.265 Total : 8533.35 33.33 779.56 0.00 13717.39 813.13 16990.81 00:30:52.265 Received shutdown signal, test time was about 15.000000 seconds 00:30:52.265 00:30:52.265 Latency(us) 00:30:52.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.265 =================================================================================================================== 00:30:52.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1741764 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1741764 /var/tmp/bdevperf.sock 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1741764 ']' 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:52.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:52.265 [2024-07-25 05:50:45.602648] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:52.265 [2024-07-25 05:50:45.863361] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:52.265 05:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:52.828 NVMe0n1 00:30:52.828 05:50:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.084 00:30:53.085 05:50:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.341 00:30:53.341 05:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:53.341 05:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:53.649 05:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.905 05:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:57.407 05:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:57.407 05:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:57.407 05:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1742430 00:30:57.407 05:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:57.407 05:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1742430 00:30:58.339 0 00:30:58.339 05:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:58.339 [2024-07-25 05:50:45.129208] Starting 
SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:30:58.339 [2024-07-25 05:50:45.129323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741764 ] 00:30:58.339 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.339 [2024-07-25 05:50:45.188769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.339 [2024-07-25 05:50:45.271341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.339 [2024-07-25 05:50:47.476510] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:58.339 [2024-07-25 05:50:47.476603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.339 [2024-07-25 05:50:47.476652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.339 [2024-07-25 05:50:47.476669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.339 [2024-07-25 05:50:47.476683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.339 [2024-07-25 05:50:47.476697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.339 [2024-07-25 05:50:47.476710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.339 [2024-07-25 05:50:47.476724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:30:58.339 [2024-07-25 05:50:47.476738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.339 [2024-07-25 05:50:47.476759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.339 [2024-07-25 05:50:47.476803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.339 [2024-07-25 05:50:47.476834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a4830 (9): Bad file descriptor 00:30:58.339 [2024-07-25 05:50:47.610409] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:58.339 Running I/O for 1 seconds... 00:30:58.339 00:30:58.339 Latency(us) 00:30:58.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:58.339 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:58.339 Verification LBA range: start 0x0 length 0x4000 00:30:58.339 NVMe0n1 : 1.00 8630.08 33.71 0.00 0.00 14767.96 1547.38 14951.92 00:30:58.339 =================================================================================================================== 00:30:58.339 Total : 8630.08 33.71 0.00 0.00 14767.96 1547.38 14951.92 00:30:58.339 05:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:58.339 05:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:58.596 05:50:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:58.854 05:50:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:58.854 05:50:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:59.111 05:50:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:59.368 05:50:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1741764 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1741764 ']' 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1741764 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1741764 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1741764' 00:31:02.645 killing process with pid 1741764 00:31:02.645 05:50:56 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1741764 00:31:02.645 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1741764 00:31:02.902 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:02.902 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.160 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:03.160 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:03.160 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:03.160 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:03.160 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:03.160 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:03.160 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:03.160 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:03.160 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:03.160 rmmod nvme_tcp 00:31:03.160 rmmod nvme_fabrics 00:31:03.417 rmmod nvme_keyring 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1739587 ']' 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # 
killprocess 1739587 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1739587 ']' 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1739587 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1739587 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1739587' 00:31:03.417 killing process with pid 1739587 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1739587 00:31:03.417 05:50:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1739587 00:31:03.676 05:50:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:03.676 05:50:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:03.676 05:50:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:03.676 05:50:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:03.676 05:50:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:03.676 05:50:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.676 05:50:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.676 
05:50:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.584 05:50:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:05.584 00:31:05.584 real 0m34.931s 00:31:05.584 user 2m3.288s 00:31:05.584 sys 0m5.754s 00:31:05.584 05:50:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:05.584 05:50:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:05.584 ************************************ 00:31:05.584 END TEST nvmf_failover 00:31:05.584 ************************************ 00:31:05.584 05:50:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:05.584 05:50:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:05.584 05:50:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:05.584 05:50:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.584 ************************************ 00:31:05.584 START TEST nvmf_host_discovery 00:31:05.584 ************************************ 00:31:05.584 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:05.842 * Looking for test storage... 
00:31:05.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- 
# NVME_CONNECT='nvme connect' 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:05.842 05:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:07.744 
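The paths/export.sh trace above shows the same /opt/go, /opt/protoc, and /opt/golangci directories being prepended to PATH over and over, because each sourcing prepends unconditionally. A hedged sketch of an idempotent prepend (this is not SPDK's actual code; the function name and the DEMO_PATH variable are made up for illustration):

```shell
# Hypothetical idempotent prepend. The real paths/export.sh prepends
# unconditionally, which is why the traced PATH accumulates duplicates.
# DEMO_PATH stands in for PATH so the demo does not disturb the environment.
prepend_path() {
    case ":$DEMO_PATH:" in
        *":$1:"*) ;;                    # already present: do nothing
        *) DEMO_PATH="$1:$DEMO_PATH" ;; # otherwise prepend
    esac
}
DEMO_PATH=/usr/local/bin:/usr/bin
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/go/1.21.1/bin   # second call is a no-op
echo "$DEMO_PATH"
```

With the guard in place, sourcing the file repeatedly leaves the variable unchanged after the first pass.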
05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.744 05:51:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:07.744 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:07.744 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:07.744 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.744 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:07.745 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
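The PCI scan traced above buckets NICs by vendor:device pair — Intel 0x1592/0x159b into e810, Intel 0x37d2 into x722, and the listed Mellanox (0x15b3) ids into mlx — before walking each device's net directory. The same bucketing can be restated as a case statement; the ids below are taken from the trace, but the function name is hypothetical, not SPDK's:

```shell
# Hypothetical restatement of the vendor:device bucketing seen in
# nvmf/common.sh's PCI scan. Ids come from the trace; classify_nic is made up.
classify_nic() {  # $1 = vendor id, $2 = device id
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}
classify_nic 0x8086 0x159b    # prints "e810" -- the ports found in this run
```

In this run both discovered ports (0000:0a:00.0 and 0000:0a:00.1) report 0x8086:0x159b, which is why the e810 array ends up driving the scan.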
nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:07.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:31:07.745 00:31:07.745 --- 10.0.0.2 ping statistics --- 00:31:07.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.745 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:07.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:31:07.745 00:31:07.745 --- 10.0.0.1 ping statistics --- 00:31:07.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.745 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1745113 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1745113 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1745113 ']' 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:07.745 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.745 [2024-07-25 05:51:01.362350] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:31:07.745 [2024-07-25 05:51:01.362420] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.745 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.745 [2024-07-25 05:51:01.426923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.003 [2024-07-25 05:51:01.517848] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.003 [2024-07-25 05:51:01.517901] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:08.003 [2024-07-25 05:51:01.517930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.003 [2024-07-25 05:51:01.517942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.003 [2024-07-25 05:51:01.517952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.003 [2024-07-25 05:51:01.517978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.003 [2024-07-25 05:51:01.657445] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
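The nvmf_tcp_init plumbing traced above (namespace creation, moving the target port, address assignment, bringing links up, the firewall rule, and the cross-namespace pings) condenses to the sequence below. It is shown as a dry run that only prints each command, since the real commands need root and the cvl_0_0/cvl_0_1 E810 ports present on this host:

```shell
# Dry-run condensation of the nvmf_tcp_init steps in the trace. run() only
# prints; executing for real requires root and the cvl_0_* interfaces.
run() { printf '+ %s\n' "$*"; }
run ip netns add cvl_0_0_ns_spdk                        # target-side namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                  # initiator -> target
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
```

The two pings at the end correspond to the 0.226 ms and 0.119 ms round trips in the log; once both succeed, nvmf_tgt is launched inside the namespace via the NVMF_TARGET_NS_CMD prefix.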
00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.003 [2024-07-25 05:51:01.665691] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.003 null0 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.003 null1 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1745239 00:31:08.003 
05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1745239 /tmp/host.sock 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1745239 ']' 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:08.003 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:08.003 05:51:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.261 [2024-07-25 05:51:01.740108] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:31:08.261 [2024-07-25 05:51:01.740193] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745239 ] 00:31:08.261 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.261 [2024-07-25 05:51:01.804185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.261 [2024-07-25 05:51:01.896845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
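waitforlisten in the trace blocks until the freshly launched process answers on its RPC socket (/var/tmp/spdk.sock or /tmp/host.sock here), with max_retries=100. A simplified, hypothetical stand-in for that poll loop is sketched below; the real helper additionally checks that the pid is still alive and probes the socket with scripts/rpc.py rather than just testing for its existence:

```shell
# Simplified, hypothetical stand-in for waitforlisten: poll for a UNIX-domain
# socket up to $2 times. The real SPDK helper also verifies the pid is alive
# and issues an actual RPC over the socket.
wait_for_sock() {  # $1 = socket path, $2 = max retries
    i=0
    while [ "$i" -lt "$2" ]; do
        [ -S "$1" ] && return 0   # socket exists: target is (probably) up
        sleep 0.1
        i=$((i + 1))
    done
    return 1                      # gave up: caller should kill and fail
}
```

Returning nonzero here is what lets the trap at host/discovery.sh@48 tear the test down instead of hanging forever on a target that never came up.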
00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.519 05:51:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.519 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.520 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
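The get_subsystem_names and get_bdev_list helpers exercised above both reduce rpc_cmd's JSON output to a sorted, space-joined name list via `jq -r '.[].name' | sort | xargs`, which is why an empty controller/bdev list compares as `'' == ''`. A standalone sketch of that reduction, fed canned JSON instead of rpc_cmd output (assumes jq is installed):

```shell
# The same jq | sort | xargs reduction the get_*_list helpers use, with canned
# JSON standing in for rpc_cmd output. Requires jq on PATH.
names_of() { jq -r '.[].name' | sort | xargs; }
echo '[{"name":"nvme0"},{"name":"null1"},{"name":"null0"}]' | names_of
# prints "null0 null1 nvme0"
```

With no controllers or bdevs created yet, the pipeline emits an empty string, matching every `[[ '' == '' ]]` check in this stretch of the trace.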
00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.777 [2024-07-25 05:51:02.299376] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:08.777 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.778 05:51:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:08.778 05:51:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:08.778 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.035 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:31:09.035 05:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:31:09.600 [2024-07-25 05:51:03.076427] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:09.600 [2024-07-25 05:51:03.076459] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:09.600 [2024-07-25 05:51:03.076491] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:09.600 [2024-07-25 05:51:03.163770] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:09.600 [2024-07-25 05:51:03.226461] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:09.600 [2024-07-25 05:51:03.226486] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # xargs 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.866 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
[[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # return 0 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:10.123 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:10.124 
05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.124 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.382 [2024-07-25 05:51:03.952186] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:10.382 [2024-07-25 05:51:03.952636] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:10.382 [2024-07-25 05:51:03.952694] 
bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.382 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.383 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.383 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.383 05:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:10.383 05:51:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:31:10.383 [2024-07-25 05:51:04.079480] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for 
nvme0 00:31:10.640 [2024-07-25 05:51:04.180218] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:10.640 [2024-07-25 05:51:04.180254] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:10.641 [2024-07-25 05:51:04.180267] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:11.575 05:51:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.575 [2024-07-25 05:51:05.172440] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:11.575 [2024-07-25 05:51:05.172487] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:11.575 [2024-07-25 05:51:05.175571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.575 [2024-07-25 05:51:05.175607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.575 [2024-07-25 05:51:05.175633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.575 [2024-07-25 05:51:05.175657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.575 [2024-07-25 05:51:05.175682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.575 [2024-07-25 05:51:05.175704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.575 [2024-07-25 05:51:05.175720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.575 [2024-07-25 05:51:05.175733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.575 [2024-07-25 05:51:05.175747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a21550 is same with the state(5) to be set 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.575 
05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.575 [2024-07-25 05:51:05.185560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a21550 (9): Bad file descriptor 00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.575 [2024-07-25 05:51:05.195621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:11.575 [2024-07-25 05:51:05.195893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.575 [2024-07-25 05:51:05.195924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a21550 with addr=10.0.0.2, port=4420 00:31:11.575 [2024-07-25 05:51:05.195942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a21550 is same with the state(5) to be set 00:31:11.575 [2024-07-25 05:51:05.195965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a21550 (9): Bad file descriptor 00:31:11.575 [2024-07-25 05:51:05.195988] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:11.575 [2024-07-25 05:51:05.196002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:11.575 [2024-07-25 05:51:05.196019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:11.575 [2024-07-25 05:51:05.196055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:11.575 [2024-07-25 05:51:05.205709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:11.575 [2024-07-25 05:51:05.205942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.575 [2024-07-25 05:51:05.205969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a21550 with addr=10.0.0.2, port=4420 00:31:11.575 [2024-07-25 05:51:05.205985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a21550 is same with the state(5) to be set 00:31:11.575 [2024-07-25 05:51:05.206007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a21550 (9): Bad file descriptor 00:31:11.575 [2024-07-25 05:51:05.206039] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:11.575 [2024-07-25 05:51:05.206056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:11.575 [2024-07-25 05:51:05.206069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:11.575 [2024-07-25 05:51:05.206105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.575 [2024-07-25 05:51:05.215786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:11.575 [2024-07-25 05:51:05.215958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.575 [2024-07-25 05:51:05.215986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a21550 with addr=10.0.0.2, port=4420 00:31:11.575 [2024-07-25 05:51:05.216006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a21550 is same with the state(5) to be set 00:31:11.575 [2024-07-25 05:51:05.216028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a21550 (9): Bad file descriptor 00:31:11.575 [2024-07-25 05:51:05.216048] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:11.575 [2024-07-25 05:51:05.216062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:11.575 [2024-07-25 05:51:05.216089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:11.575 [2024-07-25 05:51:05.216113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.575 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.576 [2024-07-25 05:51:05.225863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:11.576 [2024-07-25 05:51:05.226101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.576 [2024-07-25 05:51:05.226133] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a21550 with addr=10.0.0.2, port=4420 00:31:11.576 [2024-07-25 05:51:05.226150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a21550 is same with the state(5) to be set 00:31:11.576 [2024-07-25 05:51:05.226172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a21550 (9): Bad file descriptor 00:31:11.576 [2024-07-25 05:51:05.227138] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:11.576 [2024-07-25 05:51:05.227166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:11.576 [2024-07-25 05:51:05.227182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:11.576 [2024-07-25 05:51:05.227219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.576 [2024-07-25 05:51:05.235953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:11.576 [2024-07-25 05:51:05.236156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.576 [2024-07-25 05:51:05.236184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a21550 with addr=10.0.0.2, port=4420 00:31:11.576 [2024-07-25 05:51:05.236200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a21550 is same with the state(5) to be set 00:31:11.576 [2024-07-25 05:51:05.236222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a21550 (9): Bad file descriptor 00:31:11.576 [2024-07-25 05:51:05.236266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:11.576 [2024-07-25 05:51:05.236286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:11.576 [2024-07-25 05:51:05.236300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:11.576 [2024-07-25 05:51:05.236325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.576 [2024-07-25 05:51:05.246032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:11.576 [2024-07-25 05:51:05.246217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.576 [2024-07-25 05:51:05.246254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a21550 with addr=10.0.0.2, port=4420 00:31:11.576 [2024-07-25 05:51:05.246273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a21550 is same with the state(5) to be set 00:31:11.576 [2024-07-25 05:51:05.246306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a21550 (9): Bad file descriptor 00:31:11.576 [2024-07-25 05:51:05.246349] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:11.576 [2024-07-25 05:51:05.246367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:11.576 [2024-07-25 05:51:05.246381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:11.576 [2024-07-25 05:51:05.246399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.576 [2024-07-25 05:51:05.256109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:11.576 [2024-07-25 05:51:05.256319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.576 [2024-07-25 05:51:05.256348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a21550 with addr=10.0.0.2, port=4420 00:31:11.576 [2024-07-25 05:51:05.256364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a21550 is same with the state(5) to be set 00:31:11.576 [2024-07-25 05:51:05.256386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a21550 (9): Bad file descriptor 00:31:11.576 [2024-07-25 05:51:05.256417] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:11.576 [2024-07-25 05:51:05.256434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:11.576 [2024-07-25 05:51:05.256447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:11.576 [2024-07-25 05:51:05.256468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:11.576 [2024-07-25 05:51:05.258426] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:11.576 [2024-07-25 05:51:05.258458] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:11.576 05:51:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.576 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:31:11.835 05:51:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.835 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.836 05:51:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:11.836 05:51:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.836 05:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.211 [2024-07-25 05:51:06.514751] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:13.211 [2024-07-25 05:51:06.514778] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:13.211 [2024-07-25 05:51:06.514803] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:13.211 [2024-07-25 05:51:06.602076] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:13.211 [2024-07-25 05:51:06.749695] 
bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:13.211 [2024-07-25 05:51:06.749737] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:13.211 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.211 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.211 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:31:13.211 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.211 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 request: 00:31:13.212 { 00:31:13.212 "name": "nvme", 00:31:13.212 "trtype": "tcp", 
00:31:13.212 "traddr": "10.0.0.2", 00:31:13.212 "adrfam": "ipv4", 00:31:13.212 "trsvcid": "8009", 00:31:13.212 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:13.212 "wait_for_attach": true, 00:31:13.212 "method": "bdev_nvme_start_discovery", 00:31:13.212 "req_id": 1 00:31:13.212 } 00:31:13.212 Got JSON-RPC error response 00:31:13.212 response: 00:31:13.212 { 00:31:13.212 "code": -17, 00:31:13.212 "message": "File exists" 00:31:13.212 } 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 
00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:13.212 05:51:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 request: 00:31:13.212 { 00:31:13.212 "name": "nvme_second", 00:31:13.212 "trtype": "tcp", 00:31:13.212 "traddr": "10.0.0.2", 00:31:13.212 "adrfam": "ipv4", 00:31:13.212 "trsvcid": "8009", 00:31:13.212 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:13.212 "wait_for_attach": true, 00:31:13.212 "method": "bdev_nvme_start_discovery", 00:31:13.212 "req_id": 1 00:31:13.212 } 00:31:13.212 Got JSON-RPC error response 00:31:13.212 response: 00:31:13.212 { 00:31:13.212 "code": -17, 00:31:13.212 "message": "File exists" 00:31:13.212 } 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
jq -r '.[].name' 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.212 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # 
local es=0 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:13.470 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:13.471 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:13.471 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:13.471 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.471 05:51:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.405 [2024-07-25 05:51:07.969247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.405 [2024-07-25 05:51:07.969315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a3cc50 with addr=10.0.0.2, port=8010 00:31:14.405 [2024-07-25 05:51:07.969345] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:14.405 [2024-07-25 05:51:07.969359] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:14.405 [2024-07-25 05:51:07.969372] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:15.338 [2024-07-25 05:51:08.971641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.338 [2024-07-25 
05:51:08.971680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2ced0 with addr=10.0.0.2, port=8010 00:31:15.338 [2024-07-25 05:51:08.971701] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:15.338 [2024-07-25 05:51:08.971724] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:15.338 [2024-07-25 05:51:08.971738] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:16.711 [2024-07-25 05:51:09.973857] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:16.711 request: 00:31:16.711 { 00:31:16.711 "name": "nvme_second", 00:31:16.711 "trtype": "tcp", 00:31:16.711 "traddr": "10.0.0.2", 00:31:16.711 "adrfam": "ipv4", 00:31:16.711 "trsvcid": "8010", 00:31:16.711 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:16.711 "wait_for_attach": false, 00:31:16.711 "attach_timeout_ms": 3000, 00:31:16.711 "method": "bdev_nvme_start_discovery", 00:31:16.711 "req_id": 1 00:31:16.711 } 00:31:16.711 Got JSON-RPC error response 00:31:16.711 response: 00:31:16.711 { 00:31:16.711 "code": -110, 00:31:16.711 "message": "Connection timed out" 00:31:16.711 } 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:16.711 05:51:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1745239 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:16.711 rmmod nvme_tcp 00:31:16.711 rmmod nvme_fabrics 00:31:16.711 rmmod nvme_keyring 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:16.711 05:51:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1745113 ']' 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1745113 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1745113 ']' 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1745113 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1745113 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1745113' 00:31:16.711 killing process with pid 1745113 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1745113 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1745113 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:16.711 05:51:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.711 05:51:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.240 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:19.240 00:31:19.240 real 0m13.111s 00:31:19.240 user 0m19.192s 00:31:19.240 sys 0m2.705s 00:31:19.240 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:19.240 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.240 ************************************ 00:31:19.240 END TEST nvmf_host_discovery 00:31:19.240 ************************************ 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.241 ************************************ 00:31:19.241 START TEST nvmf_host_multipath_status 00:31:19.241 ************************************ 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:31:19.241 * Looking for test storage... 00:31:19.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:19.241 05:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:19.241 05:51:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:21.139 
05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:21.139 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:21.139 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:21.139 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:21.140 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:21.140 05:51:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:31:21.140 Found net devices under 0000:0a:00.1: cvl_0_1
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:31:21.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:21.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms
00:31:21.140
00:31:21.140 --- 10.0.0.2 ping statistics ---
00:31:21.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:21.140 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:21.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:21.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:31:21.140
00:31:21.140 --- 10.0.0.1 ping statistics ---
00:31:21.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:21.140 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1748821
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1748821
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1748821 ']'
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:21.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:21.140 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:21.140 [2024-07-25 05:51:14.677630] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization...
00:31:21.140 [2024-07-25 05:51:14.677716] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:21.140 EAL: No free 2048 kB hugepages reported on node 1
00:31:21.140 [2024-07-25 05:51:14.746222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:21.140 [2024-07-25 05:51:14.837454] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:21.140 [2024-07-25 05:51:14.837515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:21.140 [2024-07-25 05:51:14.837531] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:21.140 [2024-07-25 05:51:14.837545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:21.140 [2024-07-25 05:51:14.837557] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:21.140 [2024-07-25 05:51:14.837657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:21.140 [2024-07-25 05:51:14.837666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:21.518 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:21.518 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:31:21.518 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:31:21.518 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:21.518 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:21.518 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:21.518 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1748821
00:31:21.518 05:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:31:21.774 [2024-07-25 05:51:15.256887] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:21.774 05:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:31:22.031 Malloc0
00:31:22.031 05:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:31:22.289 05:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:22.547 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:22.804 [2024-07-25 05:51:16.301176] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:22.804 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:23.062 [2024-07-25 05:51:16.549854] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:23.062 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1749102
00:31:23.062 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:31:23.062 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:31:23.062 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1749102 /var/tmp/bdevperf.sock
00:31:23.062 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1749102 ']'
00:31:23.062 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:23.062 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:23.062 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:23.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:23.062 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:23.062 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:23.320 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:23.320 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:31:23.320 05:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:31:23.607 05:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:31:24.170 Nvme0n1
00:31:24.170 05:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:31:24.733 Nvme0n1
00:31:24.733 05:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:31:24.733 05:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:31:26.631 05:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:31:26.631 05:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:31:26.888 05:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:27.146 05:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:31:28.077 05:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:31:28.077 05:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:28.077 05:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:28.077 05:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:28.335 05:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:28.335 05:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:28.335 05:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:28.335 05:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:28.593 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:28.593 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:28.593 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:28.593 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:28.851 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:28.851 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:28.851 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:28.851 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:29.109 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:29.109 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:29.109 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:29.109 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:29.366 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:29.366 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:29.367 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:29.367 05:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:29.625 05:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:29.625 05:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:31:29.625 05:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:29.885 05:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:30.144 05:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:31:31.075 05:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:31:31.075 05:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:31.075 05:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.075 05:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:31.333 05:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:31.333 05:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:31.333 05:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.333 05:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:31.590 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:31.590 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:31.590 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.590 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:31.848 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:31.848 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:31.848 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.848 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:32.105 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:32.105 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:32.105 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:32.105 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:32.363 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:32.363 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:32.363 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:32.363 05:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:32.622 05:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:32.622 05:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:31:32.622 05:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:32.880 05:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:31:33.138 05:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:31:34.071 05:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:31:34.071 05:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:34.071 05:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:34.071 05:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:34.329 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:34.330 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:34.330 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:34.330 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:34.588 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:34.588 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:34.588 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:34.588 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:34.846 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:34.846 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:34.846 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:34.846 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:35.104 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:35.104 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:35.104 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:35.104 05:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:35.362 05:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:35.362 05:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:35.362 05:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:35.362 05:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:35.619 05:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:35.619 05:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:31:35.619 05:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:35.876 05:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:31:36.134 05:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:31:37.506 05:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:31:37.506 05:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:37.506 05:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:37.506 05:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:37.506 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:37.506 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:37.506 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:37.506 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:37.764 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:37.764 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:37.764 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:37.764 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:38.022 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:38.022 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:38.022 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:38.022 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:38.281 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:38.281 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:38.281 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:38.281 05:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:38.572 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:38.572 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:38.572 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:38.572 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:38.830 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:38.830 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:31:38.830 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:31:39.087 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:31:39.345 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:31:40.276 05:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:31:40.276 05:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:40.276 05:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:40.276 05:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:40.532 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:40.532 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:40.532 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:40.532 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:40.789 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:40.789 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:40.789 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:40.789 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:41.046 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:41.046 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:41.046 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:41.046 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:41.303 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:41.303 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:31:41.303 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:41.303 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:41.560 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:41.560 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:41.560 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:41.560 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:41.817 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:41.817 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:31:41.817 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:31:42.075 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:42.332 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:31:43.265 05:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:31:43.265 05:51:36
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:43.265 05:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.265 05:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.523 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:43.523 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:43.523 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.523 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:43.781 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.781 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:43.781 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.781 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:44.039 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.039 05:51:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:44.039 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.039 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:44.297 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.297 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:44.297 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.297 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:44.555 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:44.555 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:44.555 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.555 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:44.813 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.813 
05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:45.071 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:45.071 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:45.329 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:45.587 05:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:46.520 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:46.520 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:46.520 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.520 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:46.777 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.777 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:46.777 
05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.777 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:47.035 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.035 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:47.035 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.035 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:47.294 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.294 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:47.294 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.294 05:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:47.551 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.551 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:47.551 
05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.551 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:47.808 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.808 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:47.808 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.808 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:48.065 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.065 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:48.065 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:48.322 05:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:48.578 05:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
00:31:49.508 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:49.508 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:49.508 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.508 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:49.766 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:49.766 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:49.766 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.766 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:50.023 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.023 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:50.023 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.023 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:31:50.281 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.281 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:50.281 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.281 05:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:50.538 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.538 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:50.538 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.538 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:50.795 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.795 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:50.795 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.795 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:31:51.053 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.053 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:51.053 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:51.311 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:51.568 05:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:52.501 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:52.501 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:52.501 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.501 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:52.758 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.758 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:52.758 05:51:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.758 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:53.020 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.020 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:53.020 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.020 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:53.321 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.321 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:53.321 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.321 05:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:53.579 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.579 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:53.579 05:51:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.579 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:53.836 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.836 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:53.837 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.837 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:54.094 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.094 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:54.094 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:54.352 05:51:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:54.611 05:51:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
00:31:55.545 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:55.545 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:55.546 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.546 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:55.804 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.804 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:55.804 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.804 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:56.061 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:56.061 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:56.061 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.061 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:31:56.319 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.319 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:56.319 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.319 05:51:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:56.577 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.578 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:56.578 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.578 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:56.836 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.836 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:56.836 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.836 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:31:57.093 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.093 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1749102 00:31:57.093 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1749102 ']' 00:31:57.093 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1749102 00:31:57.094 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:31:57.094 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:57.094 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1749102 00:31:57.094 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:57.094 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:57.094 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1749102' 00:31:57.094 killing process with pid 1749102 00:31:57.094 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1749102 00:31:57.094 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1749102 00:31:57.354 Connection closed with partial response: 00:31:57.354 00:31:57.354 00:31:57.354 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1749102 00:31:57.354 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:31:57.354 [2024-07-25 05:51:16.611598] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization...
00:31:57.354 [2024-07-25 05:51:16.611677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749102 ]
00:31:57.354 EAL: No free 2048 kB hugepages reported on node 1
00:31:57.354 [2024-07-25 05:51:16.669012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:57.354 [2024-07-25 05:51:16.752843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:31:57.354 Running I/O for 90 seconds...
00:31:57.354 [2024-07-25 05:51:32.583726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:57.354 [2024-07-25 05:51:32.583777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... repeated nvme_qpair WRITE command/completion pairs elided: lba:69232 through lba:70104 (len:8 each), every completion on qid:1 reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:31:57.357 [2024-07-25 05:51:32.588866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:57.357 [2024-07-25 05:51:32.588882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:31:57.357 [2024-07-25 05:51:32.588909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:57.357 [2024-07-25 05:51:32.588926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:31:57.357 [2024-07-25 05:51:32.588953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:57.357 [2024-07-25 05:51:32.588970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:57.357 [2024-07-25 05:51:32.588997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.357 [2024-07-25 05:51:32.589014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:57.357 [2024-07-25 05:51:32.589041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.357 [2024-07-25 05:51:32.589057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:57.357 [2024-07-25 05:51:32.589084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.357 [2024-07-25 05:51:32.589101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:57.357 [2024-07-25 05:51:32.589128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.357 [2024-07-25 05:51:32.589144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:57.357 [2024-07-25 05:51:32.589170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.357 [2024-07-25 05:51:32.589187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:57.357 [2024-07-25 05:51:32.589214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.357 [2024-07-25 05:51:32.589230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:57.357 [2024-07-25 05:51:32.589275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.357 [2024-07-25 05:51:32.589293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:57.357 [2024-07-25 05:51:32.589325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.357 [2024-07-25 05:51:32.589342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:32.589369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:32.589386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:32.589412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:32.589429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:32.589456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:32.589472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:32.589499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:32.589516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:32.589543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:32.589559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:32.589602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:32.589621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:48.197376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:48.197413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:48.197451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.197973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.197992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.198014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.198029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.198050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.198065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.198085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.198100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.198121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.198137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.198157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.198172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.198193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.198209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.198254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:48.198272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.198310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:48.198328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.198350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.358 [2024-07-25 05:51:48.198367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.199544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.199570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.199598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.199615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.199638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.199660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.199683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.199699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:57.358 [2024-07-25 05:51:48.199721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.358 [2024-07-25 05:51:48.199737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.199758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.199774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.199797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.199813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.199835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.199852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.199873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.199905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.199928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.199943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.199982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.199999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.200278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.200324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.200363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.200401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.200446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.200483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.200522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.200559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.200598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.200636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.200675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.200714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.200752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.200806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.200842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.200858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.202379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.202404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.202436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.202455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.202478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.202494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.202516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.202548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.202571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.202587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.202624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:57.359 [2024-07-25 05:51:48.202640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.202661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.359 [2024-07-25 05:51:48.202676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:57.359 [2024-07-25 05:51:48.202697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.360 [2024-07-25 05:51:48.202712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:57.360 [2024-07-25 05:51:48.202732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.360 [2024-07-25 05:51:48.202748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:57.360 [2024-07-25 05:51:48.202769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.360 [2024-07-25 05:51:48.202784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:31:57.360 [2024-07-25 05:51:48.202805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:57.360 [2024-07-25 05:51:48.202820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:31:57.360 Received shutdown signal, test time was about 32.426631 seconds
00:31:57.360
00:31:57.360 Latency(us)
00:31:57.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:57.360 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:57.360 Verification LBA range: start 0x0 length 0x4000
00:31:57.360 Nvme0n1 : 32.43 7978.09 31.16 0.00 0.00 16017.41 1177.22 4026531.84
00:31:57.360 ===================================================================================================================
00:31:57.360 Total : 7978.09 31.16 0.00 0.00 16017.41 1177.22 4026531.84
00:31:57.360 05:51:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:57.618
05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:57.618 rmmod nvme_tcp 00:31:57.618 rmmod nvme_fabrics 00:31:57.618 rmmod nvme_keyring 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1748821 ']' 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1748821 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1748821 ']' 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1748821 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1748821 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:57.618 05:51:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1748821' 00:31:57.618 killing process with pid 1748821 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1748821 00:31:57.618 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1748821 00:31:57.876 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:57.876 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:57.876 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:57.876 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:57.876 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:57.876 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.876 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.876 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:00.404 00:32:00.404 real 0m41.160s 00:32:00.404 user 2m4.217s 00:32:00.404 sys 0m10.394s 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:00.404 ************************************ 00:32:00.404 END TEST nvmf_host_multipath_status 00:32:00.404 ************************************ 
00:32:00.404 05:51:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.404 ************************************ 00:32:00.404 START TEST nvmf_discovery_remove_ifc 00:32:00.404 ************************************ 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:00.404 * Looking for test storage... 00:32:00.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.404 05:51:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.404 05:51:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:00.404 05:51:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:00.404 05:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:02.304 05:51:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:02.304 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:02.304 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:02.304 
05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:02.304 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 
-- # [[ up == up ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:02.304 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:02.304 
05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:02.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:02.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:32:02.304 00:32:02.304 --- 10.0.0.2 ping statistics --- 00:32:02.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.304 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:02.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:02.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:32:02.304 00:32:02.304 --- 10.0.0.1 ping statistics --- 00:32:02.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.304 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1755173 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1755173 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1755173 ']' 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:02.304 05:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.304 [2024-07-25 05:51:55.802926] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:32:02.304 [2024-07-25 05:51:55.803005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.304 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.304 [2024-07-25 05:51:55.869298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.304 [2024-07-25 05:51:55.960757] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.304 [2024-07-25 05:51:55.960813] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:02.304 [2024-07-25 05:51:55.960838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.304 [2024-07-25 05:51:55.960852] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:02.304 [2024-07-25 05:51:55.960864] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:02.304 [2024-07-25 05:51:55.960894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.562 [2024-07-25 05:51:56.116456] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.562 [2024-07-25 05:51:56.124689] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:02.562 null0 00:32:02.562 [2024-07-25 05:51:56.156622] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1755312 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1755312 /tmp/host.sock 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1755312 ']' 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:02.562 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:02.562 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.562 [2024-07-25 05:51:56.222864] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:32:02.562 [2024-07-25 05:51:56.222932] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755312 ] 00:32:02.562 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.820 [2024-07-25 05:51:56.284013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.820 [2024-07-25 05:51:56.373996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.820 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:02.820 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:32:02.820 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.820 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:02.820 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.820 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.820 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.820 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:02.820 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.820 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.078 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.078 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:03.078 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.078 05:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.011 [2024-07-25 05:51:57.596164] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:04.011 [2024-07-25 05:51:57.596192] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:04.011 [2024-07-25 05:51:57.596219] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:04.269 [2024-07-25 05:51:57.723692] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:04.269 [2024-07-25 05:51:57.908648] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:04.269 [2024-07-25 05:51:57.908718] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:04.269 [2024-07-25 05:51:57.908761] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:04.269 [2024-07-25 05:51:57.908788] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:04.269 [2024-07-25 05:51:57.908815] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.269 [2024-07-25 05:51:57.913156] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14a0340 was disconnected and freed. delete nvme_qpair. 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:04.269 05:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:04.527 05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:04.527 05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.527 
05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.527 05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.527 05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.527 05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.527 05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.527 05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.527 05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.527 05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:04.527 05:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.461 05:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.461 05:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.461 05:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.461 05:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.461 05:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.461 05:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.461 05:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.461 05:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.461 05:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:05.461 05:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.833 05:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.833 05:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.833 05:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.833 05:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.833 05:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.833 05:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.833 05:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.833 05:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.833 05:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:06.833 05:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:07.766 05:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.766 05:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.766 05:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.766 05:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq 
-r '.[].name' 00:32:07.766 05:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.766 05:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.766 05:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.766 05:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.766 05:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.766 05:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.699 05:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.699 05:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.699 05:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.699 05:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.699 05:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.699 05:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.699 05:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.699 05:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.699 05:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:08.699 05:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:09.655 05:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.655 05:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.655 05:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.655 05:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.655 05:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.655 05:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.655 05:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.655 05:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.655 05:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:09.655 05:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:09.912 [2024-07-25 05:52:03.349803] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:09.912 [2024-07-25 05:52:03.349871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.912 [2024-07-25 05:52:03.349904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.912 [2024-07-25 05:52:03.349924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.912 [2024-07-25 05:52:03.349940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.912 [2024-07-25 05:52:03.349955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.912 [2024-07-25 05:52:03.349970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.912 [2024-07-25 05:52:03.349987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.912 [2024-07-25 05:52:03.350002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.912 [2024-07-25 05:52:03.350018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.912 [2024-07-25 05:52:03.350034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.912 [2024-07-25 05:52:03.350049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1466b60 is same with the state(5) to be set 00:32:09.912 [2024-07-25 05:52:03.359826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1466b60 (9): Bad file descriptor 00:32:09.912 [2024-07-25 05:52:03.369883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:10.844 05:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.844 05:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.844 05:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.844 05:52:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.844 05:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.844 05:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.844 05:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.844 [2024-07-25 05:52:04.382294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:10.844 [2024-07-25 05:52:04.382352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1466b60 with addr=10.0.0.2, port=4420 00:32:10.844 [2024-07-25 05:52:04.382378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1466b60 is same with the state(5) to be set 00:32:10.844 [2024-07-25 05:52:04.382431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1466b60 (9): Bad file descriptor 00:32:10.844 [2024-07-25 05:52:04.382903] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:10.844 [2024-07-25 05:52:04.382952] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:10.844 [2024-07-25 05:52:04.382971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:10.844 [2024-07-25 05:52:04.382991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:10.844 [2024-07-25 05:52:04.383020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.844 [2024-07-25 05:52:04.383040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:10.844 05:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.844 05:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:10.844 05:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:11.777 [2024-07-25 05:52:05.385573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.777 [2024-07-25 05:52:05.385627] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.777 [2024-07-25 05:52:05.385642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.777 [2024-07-25 05:52:05.385658] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:11.777 [2024-07-25 05:52:05.385699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.777 [2024-07-25 05:52:05.385741] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:11.777 [2024-07-25 05:52:05.385823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.777 [2024-07-25 05:52:05.385846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.777 [2024-07-25 05:52:05.385868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.777 [2024-07-25 05:52:05.385882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.777 [2024-07-25 05:52:05.385896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.777 [2024-07-25 05:52:05.385909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.777 [2024-07-25 05:52:05.385922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.777 [2024-07-25 05:52:05.385938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.777 [2024-07-25 05:52:05.385952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.777 [2024-07-25 05:52:05.385965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.777 [2024-07-25 05:52:05.385979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:11.777 [2024-07-25 05:52:05.386032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1465f80 (9): Bad file descriptor 00:32:11.777 [2024-07-25 05:52:05.387024] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:11.777 [2024-07-25 05:52:05.387046] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.777 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.035 05:52:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:12.035 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.035 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.035 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.035 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.035 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:12.035 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.035 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.035 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.035 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:12.035 05:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:12.969 05:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.969 05:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.969 05:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.969 05:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.969 05:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:12.969 05:52:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.969 05:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.969 05:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.969 05:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:12.969 05:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:13.902 [2024-07-25 05:52:07.441408] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:13.902 [2024-07-25 05:52:07.441437] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:13.902 [2024-07-25 05:52:07.441462] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:13.902 [2024-07-25 05:52:07.527744] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:13.902 05:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.902 05:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.902 05:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.902 05:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.902 05:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.902 05:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.902 05:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.902 05:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.159 05:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:14.159 05:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:14.159 [2024-07-25 05:52:07.632742] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:14.159 [2024-07-25 05:52:07.632797] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:14.159 [2024-07-25 05:52:07.632834] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:14.159 [2024-07-25 05:52:07.632861] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:14.159 [2024-07-25 05:52:07.632876] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:14.160 [2024-07-25 05:52:07.639092] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x147e4f0 was disconnected and freed. delete nvme_qpair. 
00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1755312 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1755312 ']' 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1755312 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1755312 
00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1755312' 00:32:15.093 killing process with pid 1755312 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1755312 00:32:15.093 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1755312 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:15.350 rmmod nvme_tcp 00:32:15.350 rmmod nvme_fabrics 00:32:15.350 rmmod nvme_keyring 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:15.350 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1755173 ']' 00:32:15.350 
05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1755173 00:32:15.351 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1755173 ']' 00:32:15.351 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1755173 00:32:15.351 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:32:15.351 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:15.351 05:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1755173 00:32:15.351 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:15.351 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:15.351 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1755173' 00:32:15.351 killing process with pid 1755173 00:32:15.351 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1755173 00:32:15.351 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1755173 00:32:15.608 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:15.608 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:15.608 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:15.608 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:15.608 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:32:15.608 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.608 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.608 05:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:18.136 00:32:18.136 real 0m17.664s 00:32:18.136 user 0m25.678s 00:32:18.136 sys 0m3.042s 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.136 ************************************ 00:32:18.136 END TEST nvmf_discovery_remove_ifc 00:32:18.136 ************************************ 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.136 ************************************ 00:32:18.136 START TEST nvmf_identify_kernel_target 00:32:18.136 ************************************ 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:18.136 * Looking for test storage... 
00:32:18.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:18.136 05:52:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:20.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.036 05:52:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:20.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.036 05:52:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:20.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.036 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:20.037 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:20.037 
05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.037 
05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:20.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:32:20.037 00:32:20.037 --- 10.0.0.2 ping statistics --- 00:32:20.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.037 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:32:20.037 00:32:20.037 --- 10.0.0.1 ping statistics --- 00:32:20.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.037 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.037 05:52:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:20.037 05:52:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:21.411 Waiting for block devices as requested 00:32:21.411 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:21.411 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:21.411 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:21.411 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:21.669 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:21.669 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:21.669 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:21.669 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:21.669 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:21.927 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:21.927 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:21.927 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:22.186 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:22.186 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:22.186 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:22.186 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:22.444 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:22.445 No valid GPT data, bailing 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:22.445 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:22.704 00:32:22.704 Discovery Log Number of Records 2, Generation counter 2 00:32:22.704 =====Discovery Log Entry 0====== 00:32:22.704 trtype: tcp 00:32:22.704 adrfam: ipv4 00:32:22.704 subtype: current discovery subsystem 00:32:22.704 treq: not specified, sq flow control disable supported 00:32:22.704 portid: 1 00:32:22.704 trsvcid: 4420 00:32:22.704 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:22.704 traddr: 10.0.0.1 00:32:22.704 eflags: none 00:32:22.704 sectype: none 00:32:22.704 =====Discovery Log Entry 1====== 00:32:22.704 trtype: tcp 00:32:22.704 adrfam: ipv4 00:32:22.704 subtype: nvme subsystem 00:32:22.704 treq: not specified, sq flow control disable supported 00:32:22.704 portid: 1 
00:32:22.704 trsvcid: 4420 00:32:22.704 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:22.704 traddr: 10.0.0.1 00:32:22.704 eflags: none 00:32:22.704 sectype: none 00:32:22.704 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:22.704 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:22.704 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.704 ===================================================== 00:32:22.704 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:22.704 ===================================================== 00:32:22.704 Controller Capabilities/Features 00:32:22.704 ================================ 00:32:22.704 Vendor ID: 0000 00:32:22.704 Subsystem Vendor ID: 0000 00:32:22.704 Serial Number: 8e28271a2dc6e0a968c8 00:32:22.704 Model Number: Linux 00:32:22.704 Firmware Version: 6.7.0-68 00:32:22.704 Recommended Arb Burst: 0 00:32:22.704 IEEE OUI Identifier: 00 00 00 00:32:22.704 Multi-path I/O 00:32:22.704 May have multiple subsystem ports: No 00:32:22.704 May have multiple controllers: No 00:32:22.704 Associated with SR-IOV VF: No 00:32:22.704 Max Data Transfer Size: Unlimited 00:32:22.704 Max Number of Namespaces: 0 00:32:22.704 Max Number of I/O Queues: 1024 00:32:22.704 NVMe Specification Version (VS): 1.3 00:32:22.704 NVMe Specification Version (Identify): 1.3 00:32:22.704 Maximum Queue Entries: 1024 00:32:22.704 Contiguous Queues Required: No 00:32:22.704 Arbitration Mechanisms Supported 00:32:22.704 Weighted Round Robin: Not Supported 00:32:22.704 Vendor Specific: Not Supported 00:32:22.704 Reset Timeout: 7500 ms 00:32:22.704 Doorbell Stride: 4 bytes 00:32:22.704 NVM Subsystem Reset: Not Supported 00:32:22.704 Command Sets Supported 00:32:22.704 NVM Command Set: Supported 00:32:22.704 Boot Partition: Not Supported 
00:32:22.704 Memory Page Size Minimum: 4096 bytes 00:32:22.704 Memory Page Size Maximum: 4096 bytes 00:32:22.704 Persistent Memory Region: Not Supported 00:32:22.704 Optional Asynchronous Events Supported 00:32:22.704 Namespace Attribute Notices: Not Supported 00:32:22.705 Firmware Activation Notices: Not Supported 00:32:22.705 ANA Change Notices: Not Supported 00:32:22.705 PLE Aggregate Log Change Notices: Not Supported 00:32:22.705 LBA Status Info Alert Notices: Not Supported 00:32:22.705 EGE Aggregate Log Change Notices: Not Supported 00:32:22.705 Normal NVM Subsystem Shutdown event: Not Supported 00:32:22.705 Zone Descriptor Change Notices: Not Supported 00:32:22.705 Discovery Log Change Notices: Supported 00:32:22.705 Controller Attributes 00:32:22.705 128-bit Host Identifier: Not Supported 00:32:22.705 Non-Operational Permissive Mode: Not Supported 00:32:22.705 NVM Sets: Not Supported 00:32:22.705 Read Recovery Levels: Not Supported 00:32:22.705 Endurance Groups: Not Supported 00:32:22.705 Predictable Latency Mode: Not Supported 00:32:22.705 Traffic Based Keep ALive: Not Supported 00:32:22.705 Namespace Granularity: Not Supported 00:32:22.705 SQ Associations: Not Supported 00:32:22.705 UUID List: Not Supported 00:32:22.705 Multi-Domain Subsystem: Not Supported 00:32:22.705 Fixed Capacity Management: Not Supported 00:32:22.705 Variable Capacity Management: Not Supported 00:32:22.705 Delete Endurance Group: Not Supported 00:32:22.705 Delete NVM Set: Not Supported 00:32:22.705 Extended LBA Formats Supported: Not Supported 00:32:22.705 Flexible Data Placement Supported: Not Supported 00:32:22.705 00:32:22.705 Controller Memory Buffer Support 00:32:22.705 ================================ 00:32:22.705 Supported: No 00:32:22.705 00:32:22.705 Persistent Memory Region Support 00:32:22.705 ================================ 00:32:22.705 Supported: No 00:32:22.705 00:32:22.705 Admin Command Set Attributes 00:32:22.705 ============================ 00:32:22.705 Security 
Send/Receive: Not Supported 00:32:22.705 Format NVM: Not Supported 00:32:22.705 Firmware Activate/Download: Not Supported 00:32:22.705 Namespace Management: Not Supported 00:32:22.705 Device Self-Test: Not Supported 00:32:22.705 Directives: Not Supported 00:32:22.705 NVMe-MI: Not Supported 00:32:22.705 Virtualization Management: Not Supported 00:32:22.705 Doorbell Buffer Config: Not Supported 00:32:22.705 Get LBA Status Capability: Not Supported 00:32:22.705 Command & Feature Lockdown Capability: Not Supported 00:32:22.705 Abort Command Limit: 1 00:32:22.705 Async Event Request Limit: 1 00:32:22.705 Number of Firmware Slots: N/A 00:32:22.705 Firmware Slot 1 Read-Only: N/A 00:32:22.705 Firmware Activation Without Reset: N/A 00:32:22.705 Multiple Update Detection Support: N/A 00:32:22.705 Firmware Update Granularity: No Information Provided 00:32:22.705 Per-Namespace SMART Log: No 00:32:22.705 Asymmetric Namespace Access Log Page: Not Supported 00:32:22.705 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:22.705 Command Effects Log Page: Not Supported 00:32:22.705 Get Log Page Extended Data: Supported 00:32:22.705 Telemetry Log Pages: Not Supported 00:32:22.705 Persistent Event Log Pages: Not Supported 00:32:22.705 Supported Log Pages Log Page: May Support 00:32:22.705 Commands Supported & Effects Log Page: Not Supported 00:32:22.705 Feature Identifiers & Effects Log Page:May Support 00:32:22.705 NVMe-MI Commands & Effects Log Page: May Support 00:32:22.705 Data Area 4 for Telemetry Log: Not Supported 00:32:22.705 Error Log Page Entries Supported: 1 00:32:22.705 Keep Alive: Not Supported 00:32:22.705 00:32:22.705 NVM Command Set Attributes 00:32:22.705 ========================== 00:32:22.705 Submission Queue Entry Size 00:32:22.705 Max: 1 00:32:22.705 Min: 1 00:32:22.705 Completion Queue Entry Size 00:32:22.705 Max: 1 00:32:22.705 Min: 1 00:32:22.705 Number of Namespaces: 0 00:32:22.705 Compare Command: Not Supported 00:32:22.705 Write Uncorrectable Command: 
Not Supported 00:32:22.705 Dataset Management Command: Not Supported 00:32:22.705 Write Zeroes Command: Not Supported 00:32:22.705 Set Features Save Field: Not Supported 00:32:22.705 Reservations: Not Supported 00:32:22.705 Timestamp: Not Supported 00:32:22.705 Copy: Not Supported 00:32:22.705 Volatile Write Cache: Not Present 00:32:22.705 Atomic Write Unit (Normal): 1 00:32:22.705 Atomic Write Unit (PFail): 1 00:32:22.705 Atomic Compare & Write Unit: 1 00:32:22.705 Fused Compare & Write: Not Supported 00:32:22.705 Scatter-Gather List 00:32:22.705 SGL Command Set: Supported 00:32:22.705 SGL Keyed: Not Supported 00:32:22.705 SGL Bit Bucket Descriptor: Not Supported 00:32:22.705 SGL Metadata Pointer: Not Supported 00:32:22.705 Oversized SGL: Not Supported 00:32:22.705 SGL Metadata Address: Not Supported 00:32:22.705 SGL Offset: Supported 00:32:22.705 Transport SGL Data Block: Not Supported 00:32:22.705 Replay Protected Memory Block: Not Supported 00:32:22.705 00:32:22.705 Firmware Slot Information 00:32:22.705 ========================= 00:32:22.705 Active slot: 0 00:32:22.705 00:32:22.705 00:32:22.705 Error Log 00:32:22.705 ========= 00:32:22.705 00:32:22.705 Active Namespaces 00:32:22.705 ================= 00:32:22.705 Discovery Log Page 00:32:22.705 ================== 00:32:22.705 Generation Counter: 2 00:32:22.705 Number of Records: 2 00:32:22.705 Record Format: 0 00:32:22.705 00:32:22.705 Discovery Log Entry 0 00:32:22.705 ---------------------- 00:32:22.705 Transport Type: 3 (TCP) 00:32:22.705 Address Family: 1 (IPv4) 00:32:22.705 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:22.705 Entry Flags: 00:32:22.705 Duplicate Returned Information: 0 00:32:22.705 Explicit Persistent Connection Support for Discovery: 0 00:32:22.705 Transport Requirements: 00:32:22.705 Secure Channel: Not Specified 00:32:22.705 Port ID: 1 (0x0001) 00:32:22.705 Controller ID: 65535 (0xffff) 00:32:22.705 Admin Max SQ Size: 32 00:32:22.705 Transport Service Identifier: 4420 
00:32:22.705 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:22.705 Transport Address: 10.0.0.1 00:32:22.705 Discovery Log Entry 1 00:32:22.705 ---------------------- 00:32:22.705 Transport Type: 3 (TCP) 00:32:22.705 Address Family: 1 (IPv4) 00:32:22.705 Subsystem Type: 2 (NVM Subsystem) 00:32:22.705 Entry Flags: 00:32:22.705 Duplicate Returned Information: 0 00:32:22.705 Explicit Persistent Connection Support for Discovery: 0 00:32:22.705 Transport Requirements: 00:32:22.705 Secure Channel: Not Specified 00:32:22.705 Port ID: 1 (0x0001) 00:32:22.705 Controller ID: 65535 (0xffff) 00:32:22.705 Admin Max SQ Size: 32 00:32:22.705 Transport Service Identifier: 4420 00:32:22.705 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:22.705 Transport Address: 10.0.0.1 00:32:22.705 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:22.705 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.705 get_feature(0x01) failed 00:32:22.705 get_feature(0x02) failed 00:32:22.705 get_feature(0x04) failed 00:32:22.705 ===================================================== 00:32:22.705 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:22.705 ===================================================== 00:32:22.705 Controller Capabilities/Features 00:32:22.705 ================================ 00:32:22.705 Vendor ID: 0000 00:32:22.705 Subsystem Vendor ID: 0000 00:32:22.705 Serial Number: fd201623846c7c5d827d 00:32:22.705 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:22.705 Firmware Version: 6.7.0-68 00:32:22.705 Recommended Arb Burst: 6 00:32:22.705 IEEE OUI Identifier: 00 00 00 00:32:22.705 Multi-path I/O 00:32:22.705 May have multiple subsystem ports: Yes 00:32:22.705 May have multiple 
controllers: Yes 00:32:22.705 Associated with SR-IOV VF: No 00:32:22.705 Max Data Transfer Size: Unlimited 00:32:22.705 Max Number of Namespaces: 1024 00:32:22.705 Max Number of I/O Queues: 128 00:32:22.705 NVMe Specification Version (VS): 1.3 00:32:22.705 NVMe Specification Version (Identify): 1.3 00:32:22.705 Maximum Queue Entries: 1024 00:32:22.705 Contiguous Queues Required: No 00:32:22.705 Arbitration Mechanisms Supported 00:32:22.706 Weighted Round Robin: Not Supported 00:32:22.706 Vendor Specific: Not Supported 00:32:22.706 Reset Timeout: 7500 ms 00:32:22.706 Doorbell Stride: 4 bytes 00:32:22.706 NVM Subsystem Reset: Not Supported 00:32:22.706 Command Sets Supported 00:32:22.706 NVM Command Set: Supported 00:32:22.706 Boot Partition: Not Supported 00:32:22.706 Memory Page Size Minimum: 4096 bytes 00:32:22.706 Memory Page Size Maximum: 4096 bytes 00:32:22.706 Persistent Memory Region: Not Supported 00:32:22.706 Optional Asynchronous Events Supported 00:32:22.706 Namespace Attribute Notices: Supported 00:32:22.706 Firmware Activation Notices: Not Supported 00:32:22.706 ANA Change Notices: Supported 00:32:22.706 PLE Aggregate Log Change Notices: Not Supported 00:32:22.706 LBA Status Info Alert Notices: Not Supported 00:32:22.706 EGE Aggregate Log Change Notices: Not Supported 00:32:22.706 Normal NVM Subsystem Shutdown event: Not Supported 00:32:22.706 Zone Descriptor Change Notices: Not Supported 00:32:22.706 Discovery Log Change Notices: Not Supported 00:32:22.706 Controller Attributes 00:32:22.706 128-bit Host Identifier: Supported 00:32:22.706 Non-Operational Permissive Mode: Not Supported 00:32:22.706 NVM Sets: Not Supported 00:32:22.706 Read Recovery Levels: Not Supported 00:32:22.706 Endurance Groups: Not Supported 00:32:22.706 Predictable Latency Mode: Not Supported 00:32:22.706 Traffic Based Keep ALive: Supported 00:32:22.706 Namespace Granularity: Not Supported 00:32:22.706 SQ Associations: Not Supported 00:32:22.706 UUID List: Not Supported 
00:32:22.706 Multi-Domain Subsystem: Not Supported 00:32:22.706 Fixed Capacity Management: Not Supported 00:32:22.706 Variable Capacity Management: Not Supported 00:32:22.706 Delete Endurance Group: Not Supported 00:32:22.706 Delete NVM Set: Not Supported 00:32:22.706 Extended LBA Formats Supported: Not Supported 00:32:22.706 Flexible Data Placement Supported: Not Supported 00:32:22.706 00:32:22.706 Controller Memory Buffer Support 00:32:22.706 ================================ 00:32:22.706 Supported: No 00:32:22.706 00:32:22.706 Persistent Memory Region Support 00:32:22.706 ================================ 00:32:22.706 Supported: No 00:32:22.706 00:32:22.706 Admin Command Set Attributes 00:32:22.706 ============================ 00:32:22.706 Security Send/Receive: Not Supported 00:32:22.706 Format NVM: Not Supported 00:32:22.706 Firmware Activate/Download: Not Supported 00:32:22.706 Namespace Management: Not Supported 00:32:22.706 Device Self-Test: Not Supported 00:32:22.706 Directives: Not Supported 00:32:22.706 NVMe-MI: Not Supported 00:32:22.706 Virtualization Management: Not Supported 00:32:22.706 Doorbell Buffer Config: Not Supported 00:32:22.706 Get LBA Status Capability: Not Supported 00:32:22.706 Command & Feature Lockdown Capability: Not Supported 00:32:22.706 Abort Command Limit: 4 00:32:22.706 Async Event Request Limit: 4 00:32:22.706 Number of Firmware Slots: N/A 00:32:22.706 Firmware Slot 1 Read-Only: N/A 00:32:22.706 Firmware Activation Without Reset: N/A 00:32:22.706 Multiple Update Detection Support: N/A 00:32:22.706 Firmware Update Granularity: No Information Provided 00:32:22.706 Per-Namespace SMART Log: Yes 00:32:22.706 Asymmetric Namespace Access Log Page: Supported 00:32:22.706 ANA Transition Time : 10 sec 00:32:22.706 00:32:22.706 Asymmetric Namespace Access Capabilities 00:32:22.706 ANA Optimized State : Supported 00:32:22.706 ANA Non-Optimized State : Supported 00:32:22.706 ANA Inaccessible State : Supported 00:32:22.706 ANA Persistent Loss 
State : Supported 00:32:22.706 ANA Change State : Supported 00:32:22.706 ANAGRPID is not changed : No 00:32:22.706 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:22.706 00:32:22.706 ANA Group Identifier Maximum : 128 00:32:22.706 Number of ANA Group Identifiers : 128 00:32:22.706 Max Number of Allowed Namespaces : 1024 00:32:22.706 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:22.706 Command Effects Log Page: Supported 00:32:22.706 Get Log Page Extended Data: Supported 00:32:22.706 Telemetry Log Pages: Not Supported 00:32:22.706 Persistent Event Log Pages: Not Supported 00:32:22.706 Supported Log Pages Log Page: May Support 00:32:22.706 Commands Supported & Effects Log Page: Not Supported 00:32:22.706 Feature Identifiers & Effects Log Page:May Support 00:32:22.706 NVMe-MI Commands & Effects Log Page: May Support 00:32:22.706 Data Area 4 for Telemetry Log: Not Supported 00:32:22.706 Error Log Page Entries Supported: 128 00:32:22.706 Keep Alive: Supported 00:32:22.706 Keep Alive Granularity: 1000 ms 00:32:22.706 00:32:22.706 NVM Command Set Attributes 00:32:22.706 ========================== 00:32:22.706 Submission Queue Entry Size 00:32:22.706 Max: 64 00:32:22.706 Min: 64 00:32:22.706 Completion Queue Entry Size 00:32:22.706 Max: 16 00:32:22.706 Min: 16 00:32:22.706 Number of Namespaces: 1024 00:32:22.706 Compare Command: Not Supported 00:32:22.706 Write Uncorrectable Command: Not Supported 00:32:22.706 Dataset Management Command: Supported 00:32:22.706 Write Zeroes Command: Supported 00:32:22.706 Set Features Save Field: Not Supported 00:32:22.706 Reservations: Not Supported 00:32:22.706 Timestamp: Not Supported 00:32:22.706 Copy: Not Supported 00:32:22.706 Volatile Write Cache: Present 00:32:22.706 Atomic Write Unit (Normal): 1 00:32:22.706 Atomic Write Unit (PFail): 1 00:32:22.706 Atomic Compare & Write Unit: 1 00:32:22.706 Fused Compare & Write: Not Supported 00:32:22.706 Scatter-Gather List 00:32:22.706 SGL Command Set: Supported 00:32:22.706 SGL 
Keyed: Not Supported 00:32:22.706 SGL Bit Bucket Descriptor: Not Supported 00:32:22.706 SGL Metadata Pointer: Not Supported 00:32:22.706 Oversized SGL: Not Supported 00:32:22.706 SGL Metadata Address: Not Supported 00:32:22.706 SGL Offset: Supported 00:32:22.706 Transport SGL Data Block: Not Supported 00:32:22.706 Replay Protected Memory Block: Not Supported 00:32:22.706 00:32:22.706 Firmware Slot Information 00:32:22.706 ========================= 00:32:22.706 Active slot: 0 00:32:22.706 00:32:22.706 Asymmetric Namespace Access 00:32:22.706 =========================== 00:32:22.706 Change Count : 0 00:32:22.706 Number of ANA Group Descriptors : 1 00:32:22.706 ANA Group Descriptor : 0 00:32:22.706 ANA Group ID : 1 00:32:22.706 Number of NSID Values : 1 00:32:22.706 Change Count : 0 00:32:22.706 ANA State : 1 00:32:22.706 Namespace Identifier : 1 00:32:22.706 00:32:22.706 Commands Supported and Effects 00:32:22.706 ============================== 00:32:22.706 Admin Commands 00:32:22.706 -------------- 00:32:22.706 Get Log Page (02h): Supported 00:32:22.706 Identify (06h): Supported 00:32:22.706 Abort (08h): Supported 00:32:22.706 Set Features (09h): Supported 00:32:22.706 Get Features (0Ah): Supported 00:32:22.706 Asynchronous Event Request (0Ch): Supported 00:32:22.706 Keep Alive (18h): Supported 00:32:22.706 I/O Commands 00:32:22.706 ------------ 00:32:22.706 Flush (00h): Supported 00:32:22.706 Write (01h): Supported LBA-Change 00:32:22.706 Read (02h): Supported 00:32:22.706 Write Zeroes (08h): Supported LBA-Change 00:32:22.706 Dataset Management (09h): Supported 00:32:22.706 00:32:22.706 Error Log 00:32:22.706 ========= 00:32:22.706 Entry: 0 00:32:22.706 Error Count: 0x3 00:32:22.706 Submission Queue Id: 0x0 00:32:22.706 Command Id: 0x5 00:32:22.706 Phase Bit: 0 00:32:22.706 Status Code: 0x2 00:32:22.706 Status Code Type: 0x0 00:32:22.706 Do Not Retry: 1 00:32:22.706 Error Location: 0x28 00:32:22.706 LBA: 0x0 00:32:22.706 Namespace: 0x0 00:32:22.706 Vendor Log Page: 
0x0 00:32:22.706 ----------- 00:32:22.706 Entry: 1 00:32:22.706 Error Count: 0x2 00:32:22.706 Submission Queue Id: 0x0 00:32:22.706 Command Id: 0x5 00:32:22.706 Phase Bit: 0 00:32:22.706 Status Code: 0x2 00:32:22.706 Status Code Type: 0x0 00:32:22.706 Do Not Retry: 1 00:32:22.706 Error Location: 0x28 00:32:22.706 LBA: 0x0 00:32:22.706 Namespace: 0x0 00:32:22.706 Vendor Log Page: 0x0 00:32:22.706 ----------- 00:32:22.706 Entry: 2 00:32:22.706 Error Count: 0x1 00:32:22.706 Submission Queue Id: 0x0 00:32:22.706 Command Id: 0x4 00:32:22.706 Phase Bit: 0 00:32:22.706 Status Code: 0x2 00:32:22.706 Status Code Type: 0x0 00:32:22.706 Do Not Retry: 1 00:32:22.706 Error Location: 0x28 00:32:22.706 LBA: 0x0 00:32:22.706 Namespace: 0x0 00:32:22.707 Vendor Log Page: 0x0 00:32:22.707 00:32:22.707 Number of Queues 00:32:22.707 ================ 00:32:22.707 Number of I/O Submission Queues: 128 00:32:22.707 Number of I/O Completion Queues: 128 00:32:22.707 00:32:22.707 ZNS Specific Controller Data 00:32:22.707 ============================ 00:32:22.707 Zone Append Size Limit: 0 00:32:22.707 00:32:22.707 00:32:22.707 Active Namespaces 00:32:22.707 ================= 00:32:22.707 get_feature(0x05) failed 00:32:22.707 Namespace ID:1 00:32:22.707 Command Set Identifier: NVM (00h) 00:32:22.707 Deallocate: Supported 00:32:22.707 Deallocated/Unwritten Error: Not Supported 00:32:22.707 Deallocated Read Value: Unknown 00:32:22.707 Deallocate in Write Zeroes: Not Supported 00:32:22.707 Deallocated Guard Field: 0xFFFF 00:32:22.707 Flush: Supported 00:32:22.707 Reservation: Not Supported 00:32:22.707 Namespace Sharing Capabilities: Multiple Controllers 00:32:22.707 Size (in LBAs): 1953525168 (931GiB) 00:32:22.707 Capacity (in LBAs): 1953525168 (931GiB) 00:32:22.707 Utilization (in LBAs): 1953525168 (931GiB) 00:32:22.707 UUID: 4135b810-2d44-4984-8211-23961d74ddd4 00:32:22.707 Thin Provisioning: Not Supported 00:32:22.707 Per-NS Atomic Units: Yes 00:32:22.707 Atomic Boundary Size (Normal): 0 
00:32:22.707 Atomic Boundary Size (PFail): 0 00:32:22.707 Atomic Boundary Offset: 0 00:32:22.707 NGUID/EUI64 Never Reused: No 00:32:22.707 ANA group ID: 1 00:32:22.707 Namespace Write Protected: No 00:32:22.707 Number of LBA Formats: 1 00:32:22.707 Current LBA Format: LBA Format #00 00:32:22.707 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:22.707 00:32:22.707 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:22.707 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:22.707 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:22.707 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:22.707 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:22.707 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:22.707 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:22.707 rmmod nvme_tcp 00:32:22.966 rmmod nvme_fabrics 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:22.966 
05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.966 05:52:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.866 05:52:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:24.866 05:52:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:24.866 05:52:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:24.866 05:52:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:24.866 05:52:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:24.866 05:52:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:24.866 05:52:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:24.866 05:52:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:24.866 05:52:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:24.866 05:52:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:24.866 05:52:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:26.240 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:26.240 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:26.240 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:26.240 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:26.240 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:26.240 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:26.240 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:26.240 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:26.240 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:26.240 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:26.240 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:26.240 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:26.240 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:26.240 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:26.240 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:26.240 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:27.204 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:27.204 00:32:27.204 real 0m9.552s 00:32:27.204 user 0m2.068s 00:32:27.204 sys 0m3.476s 00:32:27.204 05:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:27.204 05:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:27.204 ************************************ 00:32:27.204 END TEST nvmf_identify_kernel_target 00:32:27.204 ************************************ 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.463 ************************************ 00:32:27.463 START TEST nvmf_auth_host 00:32:27.463 ************************************ 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:27.463 * Looking for test storage... 00:32:27.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.463 05:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.463 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.463 05:52:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:27.464 05:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:29.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.366 05:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:29.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:29.366 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:29.366 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.366 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:29.367 05:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.367 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:29.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:32:29.625 00:32:29.625 --- 10.0.0.2 ping statistics --- 00:32:29.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.625 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:29.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:32:29.625 00:32:29.625 --- 10.0.0.1 ping statistics --- 00:32:29.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.625 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1762390 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1762390 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1762390 ']' 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:29.625 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:29.626 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:29.626 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ed24faaaf12c3053299928824300440a 00:32:29.883 05:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xSl 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ed24faaaf12c3053299928824300440a 0 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ed24faaaf12c3053299928824300440a 0 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ed24faaaf12c3053299928824300440a 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xSl 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xSl 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.xSl 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:29.883 05:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8b83f85cba92fdb63c8299a3d1ec03d257eaf268d3ebaeb19b2cb108d665ef68 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.iWr 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8b83f85cba92fdb63c8299a3d1ec03d257eaf268d3ebaeb19b2cb108d665ef68 3 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8b83f85cba92fdb63c8299a3d1ec03d257eaf268d3ebaeb19b2cb108d665ef68 3 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8b83f85cba92fdb63c8299a3d1ec03d257eaf268d3ebaeb19b2cb108d665ef68 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:29.883 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.iWr 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.iWr 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.iWr 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=32ed46fc2ec82a320000962ef6be5eef172754602d1440df 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Iyu 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 32ed46fc2ec82a320000962ef6be5eef172754602d1440df 0 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 32ed46fc2ec82a320000962ef6be5eef172754602d1440df 0 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=32ed46fc2ec82a320000962ef6be5eef172754602d1440df 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Iyu 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Iyu 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Iyu 
00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5e23093bdf670b5374813acdc6abedf778ddb253394a3d32 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dAz 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5e23093bdf670b5374813acdc6abedf778ddb253394a3d32 2 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5e23093bdf670b5374813acdc6abedf778ddb253394a3d32 2 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5e23093bdf670b5374813acdc6abedf778ddb253394a3d32 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.140 05:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dAz 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dAz 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.dAz 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=41ac8f9766cabc7b42876097f8301409 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.i9K 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 41ac8f9766cabc7b42876097f8301409 1 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 41ac8f9766cabc7b42876097f8301409 1 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=41ac8f9766cabc7b42876097f8301409 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.i9K 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.i9K 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.i9K 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=420d076cf1507fc9afa4065f7e3d158b 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NxD 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 420d076cf1507fc9afa4065f7e3d158b 1 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 420d076cf1507fc9afa4065f7e3d158b 1 00:32:30.140 05:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=420d076cf1507fc9afa4065f7e3d158b 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NxD 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NxD 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.NxD 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=10ce028c58e53f0c2338998334cfe630a6bd048e71439b1a 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ry5 00:32:30.140 05:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 10ce028c58e53f0c2338998334cfe630a6bd048e71439b1a 2 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 10ce028c58e53f0c2338998334cfe630a6bd048e71439b1a 2 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=10ce028c58e53f0c2338998334cfe630a6bd048e71439b1a 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:30.140 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ry5 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ry5 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ry5 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=b1f7faec966d4604e1b8c70aaa7379ea 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LLN 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b1f7faec966d4604e1b8c70aaa7379ea 0 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b1f7faec966d4604e1b8c70aaa7379ea 0 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b1f7faec966d4604e1b8c70aaa7379ea 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LLN 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LLN 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.LLN 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=698a73ea24c84cd869fe8c3dffc4e144e4bcc822c8711deadf0c59d324f3f18c 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oCw 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 698a73ea24c84cd869fe8c3dffc4e144e4bcc822c8711deadf0c59d324f3f18c 3 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 698a73ea24c84cd869fe8c3dffc4e144e4bcc822c8711deadf0c59d324f3f18c 3 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=698a73ea24c84cd869fe8c3dffc4e144e4bcc822c8711deadf0c59d324f3f18c 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oCw 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oCw 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.oCw 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1762390 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 1762390 ']' 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:30.398 05:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xSl 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.iWr ]] 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iWr 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Iyu 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.dAz ]] 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dAz 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.i9K 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.656 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.NxD ]] 00:32:30.657 05:52:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NxD 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ry5 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.LLN ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.LLN 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.oCw 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:30.657 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:30.915 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:30.915 05:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:31.846 Waiting for block devices as requested 00:32:31.846 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:31.846 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:32.103 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:32.103 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:32.103 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:32.360 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:32.360 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:32.360 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:32.360 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:32.618 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:32.618 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:32.618 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:32.875 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:32.875 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:32.875 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:32.875 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:33.134 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:32:33.392 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:33.392 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:33.392 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:33.392 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:33.392 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:33.392 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:33.392 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:33.392 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:33.392 05:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:33.392 No valid GPT data, bailing 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:33.392 05:52:27 
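The `configure_kernel_target` sequence above builds a kernel NVMe-oF target purely by creating configfs directories and writing attribute files under `/sys/kernel/config/nvmet`, then symlinking the subsystem into the port. A sketch of that layout, run against a temp directory so it can be exercised without root or the `nvmet` module (on the real system `nvmet=/sys/kernel/config/nvmet` and the attribute names are the ones echoed in the trace):

```shell
#!/usr/bin/env bash
set -euo pipefail

nvmet=$(mktemp -d)             # stand-in for /sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0

subsys=$nvmet/subsystems/$subnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

# on real configfs these mkdirs instantiate the objects
mkdir -p "$ns" "$port/subsystems"

echo 1        > "$ns/enable"          # enable namespace 1
echo 10.0.0.1 > "$port/addr_traddr"   # listen address
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# exporting the subsystem on the port is just a symlink
ln -s "$subsys" "$port/subsystems/"
```

After the symlink, the discovery service reports the subsystem on that port, which is what the `nvme discover` output that follows in the log shows.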
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:33.392 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:33.650 00:32:33.650 Discovery Log Number of Records 2, Generation counter 2 00:32:33.650 =====Discovery Log Entry 0====== 00:32:33.650 trtype: tcp 00:32:33.650 adrfam: ipv4 00:32:33.650 subtype: current discovery subsystem 00:32:33.650 treq: not specified, sq flow control disable supported 00:32:33.650 portid: 1 00:32:33.650 trsvcid: 4420 00:32:33.650 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:33.650 traddr: 10.0.0.1 00:32:33.650 eflags: none 00:32:33.650 sectype: none 00:32:33.650 =====Discovery Log Entry 1====== 00:32:33.650 trtype: tcp 00:32:33.650 adrfam: ipv4 00:32:33.650 subtype: nvme subsystem 00:32:33.650 treq: not specified, sq flow control 
disable supported 00:32:33.650 portid: 1 00:32:33.650 trsvcid: 4420 00:32:33.650 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:33.650 traddr: 10.0.0.1 00:32:33.650 eflags: none 00:32:33.650 sectype: none 00:32:33.650 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:33.650 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:33.650 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:33.650 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:33.650 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.651 05:52:27 
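On the SPDK side, the trace drives everything through `rpc_cmd`: each generated key file is registered with `keyring_file_add_key`, then `bdev_nvme_set_options` selects the allowed DH-HMAC-CHAP digests and DH groups before the controller is attached. A dry-run sketch of that sequence (no target is running here, so `rpc_cmd` just echoes the `rpc.py` invocation instead of executing it; the key paths are the ones visible in the log):

```shell
#!/usr/bin/env bash
set -euo pipefail

rpc_cmd() { echo "rpc.py $*"; }    # dry-run stand-in for the test's rpc_cmd

keys=(/tmp/spdk.key-null.xSl /tmp/spdk.key-null.Iyu)
cmds=()
for i in "${!keys[@]}"; do
    cmds+=("$(rpc_cmd keyring_file_add_key "key$i" "${keys[i]}")")
done
cmds+=("$(rpc_cmd bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192)")
printf '%s\n' "${cmds[@]}"
```

The comma-joined digest and dhgroup lists are produced in the log by the `IFS=, / printf %s` pair just before the `bdev_nvme_set_options` call.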
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.651 nvme0n1 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.651 05:52:27 
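The `get_main_ns_ip` helper traced repeatedly above selects which IP to connect to by transport: `NVMF_FIRST_TARGET_IP` for rdma, `NVMF_INITIATOR_IP` for tcp. A simplified sketch (the real helper stores variable *names* in the map and dereferences them; here the values are inlined, with the tcp address taken from the log and the rdma one purely hypothetical):

```shell
#!/usr/bin/env bash
set -euo pipefail

NVMF_INITIATOR_IP=10.0.0.1      # as seen in the trace
NVMF_FIRST_TARGET_IP=10.0.0.2   # hypothetical; not shown in this excerpt

# pick the connection IP for a given transport, empty transport -> no output
get_main_ns_ip() {
    local transport=$1 ip
    declare -A ip_candidates=([rdma]=$NVMF_FIRST_TARGET_IP [tcp]=$NVMF_INITIATOR_IP)
    ip=${ip_candidates[$transport]}
    [ -n "$ip" ] && echo "$ip"
}

main_ip=$(get_main_ns_ip tcp)
```

That resolved address is what feeds the `-a 10.0.0.1` argument of every `bdev_nvme_attach_controller` call in this section.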
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.651 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.909 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.910 05:52:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.910 nvme0n1 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.910 05:52:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.910 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.168 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.169 nvme0n1 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:34.169 
05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.169 05:52:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.169 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.427 nvme0n1 00:32:34.427 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.427 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.427 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.427 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.427 05:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.427 05:52:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.427 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.685 nvme0n1 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.685 05:52:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.685 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.943 nvme0n1 00:32:34.943 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.944 05:52:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.944 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.202 nvme0n1 00:32:35.202 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.202 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.203 05:52:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:35.203 05:52:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.203 05:52:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.203 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.461 nvme0n1 00:32:35.461 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.461 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.461 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.461 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.461 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.461 05:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.461 05:52:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:35.461 05:52:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.461 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.462 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.720 nvme0n1 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe3072 3 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:32:35.720 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:35.721 05:52:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.721 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.979 nvme0n1 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.979 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.980 
05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.980 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.238 nvme0n1 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.238 05:52:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.238 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.239 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.239 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.239 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.239 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.239 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.239 05:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.497 nvme0n1 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:36.497 05:52:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.497 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.498 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.063 nvme0n1 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.063 
05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.063 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.322 nvme0n1 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.322 05:52:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.322 05:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:37.580 nvme0n1 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.580 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.581 
05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.581 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.146 nvme0n1 00:32:38.146 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.146 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.146 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.146 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.146 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.147 05:52:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.147 05:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.712 nvme0n1 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.712 05:52:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.712 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.713 05:52:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.713 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.278 nvme0n1 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.278 05:52:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.278 05:52:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.278 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.279 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.279 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.279 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.279 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.279 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.279 05:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.842 nvme0n1 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.842 05:52:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.842 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.843 05:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:40.407 nvme0n1 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.407 
05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.407 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.971 nvme0n1 00:32:40.971 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.971 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.971 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.971 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.971 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.971 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.971 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.971 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.971 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.971 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.235 05:52:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.235 05:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.209 nvme0n1 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.209 05:52:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.209 05:52:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.209 05:52:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.141 nvme0n1 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.142 05:52:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.142 05:52:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.142 05:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.074 nvme0n1 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.074 05:52:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.074 05:52:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:45.007 nvme0n1 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.007 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.265 
05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.265 05:52:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.199 nvme0n1 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.199 05:52:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.199 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.457 nvme0n1 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:46.457 05:52:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.457 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.458 05:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.458 nvme0n1 00:32:46.458 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.458 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.458 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.458 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.458 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.716 
05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.716 nvme0n1 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.716 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.974 05:52:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.974 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:46.975 nvme0n1 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.975 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.233 
05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.233 nvme0n1 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.233 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.233 05:52:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.234 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.234 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.234 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.234 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.234 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.234 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.234 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.234 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.234 05:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.492 nvme0n1 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.492 05:52:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.492 05:52:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.492 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.750 nvme0n1 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.750 05:52:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.750 05:52:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:47.750 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.751 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.008 nvme0n1 00:32:48.008 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.008 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.008 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.008 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.008 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.008 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.009 05:52:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.009 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:48.268 nvme0n1 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.268 
05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.268 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.526 05:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.526 nvme0n1 00:32:48.526 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.527 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.785 05:52:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.785 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.043 nvme0n1 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.043 05:52:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.043 05:52:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.043 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.044 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.044 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.044 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.044 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.044 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.302 nvme0n1 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.302 05:52:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.302 05:52:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.302 05:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.560 nvme0n1 00:32:49.560 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.560 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.560 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.560 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.560 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.560 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.817 05:52:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.817 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.818 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:50.075 nvme0n1 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.075 
05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.075 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.333 nvme0n1 00:32:50.333 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.333 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.333 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.333 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.333 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.333 05:52:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.333 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.590 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.590 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.590 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.591 05:52:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.591 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.848 nvme0n1 00:32:50.848 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.848 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.848 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.848 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.848 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.848 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.106 05:52:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.106 05:52:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.106 05:52:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.671 nvme0n1 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.671 05:52:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.671 05:52:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.671 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.672 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.672 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.672 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.672 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.672 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.672 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.672 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.672 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.236 nvme0n1 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.236 05:52:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.236 05:52:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:52.801 nvme0n1 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.801 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.802 
05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.802 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.366 nvme0n1 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.366 05:52:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.366 05:52:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.299 nvme0n1 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.299 05:52:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.299 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.300 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.300 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.300 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.300 05:52:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.300 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.300 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.300 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.300 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.300 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.300 05:52:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.669 nvme0n1 00:32:55.669 05:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.669 05:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.669 05:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.669 05:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.669 05:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.669 05:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.669 05:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.669 05:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.669 05:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.669 05:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.669 05:52:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.669 05:52:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.669 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.635 nvme0n1 00:32:56.635 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.635 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.635 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.635 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.635 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.635 05:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.635 05:52:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.635 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:57.568 nvme0n1 00:32:57.568 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.568 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.568 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.568 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.568 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.568 05:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.568 
05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:57.568 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.501 nvme0n1
00:32:58.501 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.501 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:58.501 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.501 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.501 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:58.501 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.501 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:58.501 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:58.501 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.501 05:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B:
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=:
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B:
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]]
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=:
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:58.501 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:58.502 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:58.502 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.502 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.502 nvme0n1
00:32:58.502 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.502 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:58.502 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.502 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:58.502 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.502 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==:
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==:
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==:
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==:
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.760 nvme0n1
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:58.760 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.761 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw:
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o:
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw:
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o:
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.019 nvme0n1
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==:
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn:
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==:
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]]
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn:
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:59.019 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.020 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.278 nvme0n1
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=:
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=:
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:59.278 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.279 05:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.537 nvme0n1
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B:
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=:
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B:
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]]
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=:
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:59.537 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.538 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.796 nvme0n1
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==:
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==:
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==:
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]]
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==:
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:59.796 05:52:53
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.796 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.054 nvme0n1 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.054 05:52:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.054 05:52:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.054 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.312 nvme0n1 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.312 05:52:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.312 05:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:00.570 nvme0n1 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.570 
05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.570 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.828 nvme0n1 00:33:00.828 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.828 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.828 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.828 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.829 05:52:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.829 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.087 nvme0n1 00:33:01.087 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.087 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.087 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.087 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.087 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.087 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.345 05:52:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.345 05:52:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.345 05:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.604 nvme0n1 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.604 05:52:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.604 05:52:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.604 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.863 nvme0n1 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.863 05:52:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.863 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:02.121 nvme0n1 00:33:02.121 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.121 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.121 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.121 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.121 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.121 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.380 
05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.380 05:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.639 nvme0n1 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.639 05:52:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.639 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.640 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.640 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.640 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.640 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.206 nvme0n1 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.206 05:52:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.206 05:52:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.206 05:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.772 nvme0n1 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.772 05:52:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:33:03.772 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.773 05:52:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.773 05:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.339 nvme0n1 00:33:04.339 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.339 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.339 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.339 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.339 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.597 05:52:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.597 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.598 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:05.164 nvme0n1 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.164 
05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.164 05:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.730 nvme0n1 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQyNGZhYWFmMTJjMzA1MzI5OTkyODgyNDMwMDQ0MGGZGz2B: 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: ]] 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGI4M2Y4NWNiYTkyZmRiNjNjODI5OWEzZDFlYzAzZDI1N2VhZjI2OGQzZWJhZWIxOWIyY2IxMDhkNjY1ZWY2OPc+r68=: 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.730 05:52:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.730 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.731 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.731 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.731 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.731 05:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.665 nvme0n1 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.665 05:53:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.665 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.666 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.666 05:53:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.666 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.666 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.666 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.666 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:06.666 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.666 05:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.599 nvme0n1 00:33:07.599 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.599 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.599 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.599 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.599 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.599 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.857 05:53:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFhYzhmOTc2NmNhYmM3YjQyODc2MDk3ZjgzMDE0MDlFlmnw: 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: ]] 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDIwZDA3NmNmMTUwN2ZjOWFmYTQwNjVmN2UzZDE1OGJABx8o: 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.857 05:53:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.857 05:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.791 nvme0n1 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.791 05:53:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBjZTAyOGM1OGU1M2YwYzIzMzg5OTgzMzRjZmU2MzBhNmJkMDQ4ZTcxNDM5YjFhFztPbw==: 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: ]] 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjFmN2ZhZWM5NjZkNDYwNGUxYjhjNzBhYWE3Mzc5ZWEtCWUn: 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.791 05:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:09.724 nvme0n1 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Njk4YTczZWEyNGM4NGNkODY5ZmU4YzNkZmZjNGUxNDRlNGJjYzgyMmM4NzExZGVhZGYwYzU5ZDMyNGYzZjE4Yy5EMT8=: 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.724 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.981 
05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.981 05:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.913 nvme0n1 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:10.913 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZDQ2ZmMyZWM4MmEzMjAwMDA5NjJlZjZiZTVlZWYxNzI3NTQ2MDJkMTQ0MGRmj5GkHw==: 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: ]] 00:33:10.914 
05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWUyMzA5M2JkZjY3MGI1Mzc0ODEzYWNkYzZhYmVkZjc3OGRkYjI1MzM5NGEzZDMySkv1Rw==: 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.914 request: 00:33:10.914 { 00:33:10.914 "name": "nvme0", 00:33:10.914 "trtype": "tcp", 00:33:10.914 "traddr": "10.0.0.1", 00:33:10.914 "adrfam": "ipv4", 00:33:10.914 "trsvcid": "4420", 00:33:10.914 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:10.914 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:10.914 "prchk_reftag": false, 00:33:10.914 "prchk_guard": false, 00:33:10.914 "hdgst": false, 00:33:10.914 "ddgst": false, 00:33:10.914 "method": "bdev_nvme_attach_controller", 00:33:10.914 "req_id": 1 00:33:10.914 } 00:33:10.914 Got JSON-RPC error response 00:33:10.914 response: 00:33:10.914 { 00:33:10.914 "code": -5, 00:33:10.914 "message": "Input/output error" 00:33:10.914 } 00:33:10.914 05:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.914 05:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.914 request: 00:33:10.914 { 00:33:10.914 "name": "nvme0", 00:33:10.914 "trtype": "tcp", 00:33:10.914 "traddr": "10.0.0.1", 00:33:10.914 "adrfam": "ipv4", 00:33:10.914 
"trsvcid": "4420", 00:33:10.914 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:10.914 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:10.914 "prchk_reftag": false, 00:33:10.914 "prchk_guard": false, 00:33:10.914 "hdgst": false, 00:33:10.914 "ddgst": false, 00:33:10.914 "dhchap_key": "key2", 00:33:10.914 "method": "bdev_nvme_attach_controller", 00:33:10.914 "req_id": 1 00:33:10.914 } 00:33:10.914 Got JSON-RPC error response 00:33:10.914 response: 00:33:10.914 { 00:33:10.914 "code": -5, 00:33:10.914 "message": "Input/output error" 00:33:10.914 } 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.914 
05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.914 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.915 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.915 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:10.915 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:10.915 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:10.915 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:10.915 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:10.915 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:10.915 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:10.915 05:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:10.915 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.915 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.174 request: 00:33:11.174 { 00:33:11.174 "name": "nvme0", 00:33:11.174 "trtype": "tcp", 00:33:11.174 "traddr": "10.0.0.1", 00:33:11.174 "adrfam": "ipv4", 00:33:11.174 "trsvcid": "4420", 00:33:11.174 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:11.174 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:11.174 "prchk_reftag": false, 00:33:11.174 "prchk_guard": false, 00:33:11.174 "hdgst": false, 00:33:11.174 "ddgst": false, 00:33:11.174 "dhchap_key": "key1", 00:33:11.174 "dhchap_ctrlr_key": "ckey2", 00:33:11.174 "method": "bdev_nvme_attach_controller", 00:33:11.174 "req_id": 1 00:33:11.174 } 00:33:11.174 Got JSON-RPC error response 00:33:11.174 response: 00:33:11.174 { 00:33:11.174 "code": -5, 00:33:11.174 "message": "Input/output error" 00:33:11.174 } 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:11.174 05:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:11.174 rmmod nvme_tcp 00:33:11.174 rmmod nvme_fabrics 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1762390 ']' 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1762390 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1762390 ']' 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1762390 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1762390 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1762390' 00:33:11.174 killing process with pid 1762390 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1762390 00:33:11.174 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1762390 00:33:11.461 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:11.461 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:11.461 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:11.461 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:11.461 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:11.461 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.461 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.461 05:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 
00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:13.362 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:13.621 05:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:14.994 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:14.994 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:14.994 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:14.994 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:14.994 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:14.994 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:14.994 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:14.994 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:14.994 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:14.994 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:14.994 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:14.995 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:14.995 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:14.995 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:14.995 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:14.995 0000:80:04.0 (8086 0e20): 
ioatdma -> vfio-pci 00:33:15.561 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:15.819 05:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.xSl /tmp/spdk.key-null.Iyu /tmp/spdk.key-sha256.i9K /tmp/spdk.key-sha384.ry5 /tmp/spdk.key-sha512.oCw /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:15.819 05:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:16.754 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:16.754 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:16.754 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:16.754 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:16.754 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:16.754 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:16.754 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:16.754 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:16.754 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:16.754 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:16.754 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:16.754 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:16.754 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:16.754 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:16.754 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:16.754 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:17.012 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:17.012 00:33:17.012 real 0m49.681s 00:33:17.012 user 0m47.485s 00:33:17.012 sys 0m5.772s 00:33:17.012 05:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:17.012 05:53:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.012 ************************************ 00:33:17.012 END TEST nvmf_auth_host 00:33:17.012 ************************************ 00:33:17.012 05:53:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:33:17.013 05:53:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:17.013 05:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:17.013 05:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:17.013 05:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.013 ************************************ 00:33:17.013 START TEST nvmf_digest 00:33:17.013 ************************************ 00:33:17.013 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:17.013 * Looking for test storage... 
00:33:17.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.271 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.272 05:53:10 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:33:17.272 05:53:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:19.173 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:19.173 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:19.173 05:53:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:19.173 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:19.173 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:19.174 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:19.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:19.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:33:19.174 00:33:19.174 --- 10.0.0.2 ping statistics --- 00:33:19.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.174 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:19.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:19.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:33:19.174 00:33:19.174 --- 10.0.0.1 ping statistics --- 00:33:19.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.174 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:19.174 ************************************ 00:33:19.174 START TEST nvmf_digest_clean 00:33:19.174 ************************************ 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1771833 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1771833 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1771833 ']' 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:19.174 05:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.432 [2024-07-25 05:53:12.908204] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:33:19.433 [2024-07-25 05:53:12.908303] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.433 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.433 [2024-07-25 05:53:12.970545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.433 [2024-07-25 05:53:13.053827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.433 [2024-07-25 05:53:13.053879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:19.433 [2024-07-25 05:53:13.053901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.433 [2024-07-25 05:53:13.053912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.433 [2024-07-25 05:53:13.053922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:19.433 [2024-07-25 05:53:13.053946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.433 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.433 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:19.433 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:19.433 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:19.433 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.692 null0 00:33:19.692 [2024-07-25 05:53:13.261703] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.692 [2024-07-25 05:53:13.285958] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1771864 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1771864 /var/tmp/bperf.sock 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1771864 ']' 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:19.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:19.692 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.692 [2024-07-25 05:53:13.337312] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:33:19.692 [2024-07-25 05:53:13.337401] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771864 ] 00:33:19.692 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.949 [2024-07-25 05:53:13.403701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.949 [2024-07-25 05:53:13.493649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.949 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.949 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:19.949 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:19.949 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:19.949 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:20.515 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:20.515 05:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:20.773 nvme0n1 00:33:20.773 05:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:20.773 05:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:20.773 Running I/O for 2 seconds... 00:33:23.301 00:33:23.301 Latency(us) 00:33:23.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.301 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:23.301 nvme0n1 : 2.01 18567.93 72.53 0.00 0.00 6884.39 3786.52 20971.52 00:33:23.301 =================================================================================================================== 00:33:23.301 Total : 18567.93 72.53 0.00 0.00 6884.39 3786.52 20971.52 00:33:23.301 0 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:23.301 | select(.opcode=="crc32c") 00:33:23.301 | "\(.module_name) \(.executed)"' 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1771864 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1771864 ']' 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1771864 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1771864 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1771864' 00:33:23.301 killing process with pid 1771864 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1771864 00:33:23.301 Received shutdown signal, test time was about 2.000000 seconds 00:33:23.301 00:33:23.301 Latency(us) 00:33:23.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.301 =================================================================================================================== 00:33:23.301 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1771864 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1772267 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1772267 /var/tmp/bperf.sock 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1772267 ']' 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:23.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:23.301 05:53:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:23.301 [2024-07-25 05:53:16.956668] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:33:23.301 [2024-07-25 05:53:16.956762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772267 ] 00:33:23.301 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:23.301 Zero copy mechanism will not be used. 00:33:23.301 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.559 [2024-07-25 05:53:17.019191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.560 [2024-07-25 05:53:17.112773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.560 05:53:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:23.560 05:53:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:23.560 05:53:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:23.560 05:53:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:23.560 05:53:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:23.817 05:53:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:23.818 05:53:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:24.383 nvme0n1 00:33:24.383 05:53:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:24.383 05:53:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:24.383 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:24.383 Zero copy mechanism will not be used. 00:33:24.383 Running I/O for 2 seconds... 00:33:26.911 00:33:26.911 Latency(us) 00:33:26.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.911 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:26.911 nvme0n1 : 2.00 3416.54 427.07 0.00 0.00 4678.61 4441.88 12281.93 00:33:26.911 =================================================================================================================== 00:33:26.911 Total : 3416.54 427.07 0.00 0.00 4678.61 4441.88 12281.93 00:33:26.911 0 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:26.911 | select(.opcode=="crc32c") 
00:33:26.911 | "\(.module_name) \(.executed)"' 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1772267 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1772267 ']' 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1772267 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1772267 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1772267' 00:33:26.911 killing process with pid 1772267 00:33:26.911 05:53:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1772267 00:33:26.911 Received shutdown signal, test time was about 2.000000 seconds 00:33:26.911 00:33:26.911 Latency(us) 00:33:26.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.911 =================================================================================================================== 00:33:26.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1772267 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1772673 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1772673 /var/tmp/bperf.sock 00:33:26.911 05:53:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1772673 ']' 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:26.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:26.911 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:27.169 [2024-07-25 05:53:20.622277] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:33:27.169 [2024-07-25 05:53:20.622378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772673 ] 00:33:27.169 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.169 [2024-07-25 05:53:20.688077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.169 [2024-07-25 05:53:20.784603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.169 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:27.169 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:27.169 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:27.169 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:27.169 05:53:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:27.735 05:53:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.735 05:53:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.993 nvme0n1 00:33:27.993 05:53:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:27.993 05:53:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:28.250 Running I/O for 2 seconds... 00:33:30.191 00:33:30.191 Latency(us) 00:33:30.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.191 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:30.191 nvme0n1 : 2.01 20516.81 80.14 0.00 0.00 6227.99 3470.98 17282.09 00:33:30.191 =================================================================================================================== 00:33:30.191 Total : 20516.81 80.14 0.00 0.00 6227.99 3470.98 17282.09 00:33:30.191 0 00:33:30.191 05:53:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:30.191 05:53:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:30.191 05:53:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:30.191 05:53:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:30.191 | select(.opcode=="crc32c") 00:33:30.191 | "\(.module_name) \(.executed)"' 00:33:30.191 05:53:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 1772673 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1772673 ']' 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1772673 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1772673 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1772673' 00:33:30.449 killing process with pid 1772673 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1772673 00:33:30.449 Received shutdown signal, test time was about 2.000000 seconds 00:33:30.449 00:33:30.449 Latency(us) 00:33:30.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.449 =================================================================================================================== 00:33:30.449 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:30.449 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1772673 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1773202 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1773202 /var/tmp/bperf.sock 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1773202 ']' 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:30.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:30.707 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:30.707 [2024-07-25 05:53:24.332627] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:33:30.707 [2024-07-25 05:53:24.332705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773202 ] 00:33:30.707 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:30.707 Zero copy mechanism will not be used. 00:33:30.707 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.707 [2024-07-25 05:53:24.394477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.965 [2024-07-25 05:53:24.485945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.965 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:30.965 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:30.965 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:30.965 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:30.965 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:31.223 05:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:31.223 05:53:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:31.789 nvme0n1 00:33:31.789 05:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:31.789 05:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:31.789 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:31.789 Zero copy mechanism will not be used. 00:33:31.789 Running I/O for 2 seconds... 00:33:34.314 00:33:34.314 Latency(us) 00:33:34.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.314 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:34.314 nvme0n1 : 2.01 2965.06 370.63 0.00 0.00 5384.12 4126.34 15437.37 00:33:34.314 =================================================================================================================== 00:33:34.315 Total : 2965.06 370.63 0.00 0.00 5384.12 4126.34 15437.37 00:33:34.315 0 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:33:34.315 | select(.opcode=="crc32c") 00:33:34.315 | "\(.module_name) \(.executed)"' 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1773202 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1773202 ']' 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1773202 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1773202 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1773202' 00:33:34.315 killing process with pid 1773202 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1773202 00:33:34.315 Received shutdown signal, test time was about 2.000000 seconds 00:33:34.315 
00:33:34.315 Latency(us) 00:33:34.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.315 =================================================================================================================== 00:33:34.315 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1773202 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1771833 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1771833 ']' 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1771833 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1771833 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1771833' 00:33:34.315 killing process with pid 1771833 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1771833 00:33:34.315 05:53:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1771833 00:33:34.572 00:33:34.572 real 0m15.280s 00:33:34.572 user 0m30.723s 00:33:34.572 sys 0m3.990s 00:33:34.572 05:53:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.572 ************************************ 00:33:34.572 END TEST nvmf_digest_clean 00:33:34.572 ************************************ 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:34.572 ************************************ 00:33:34.572 START TEST nvmf_digest_error 00:33:34.572 ************************************ 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1773635 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # 
waitforlisten 1773635 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1773635 ']' 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:34.572 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:34.573 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:34.573 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.573 [2024-07-25 05:53:28.243507] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:33:34.573 [2024-07-25 05:53:28.243597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.830 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.830 [2024-07-25 05:53:28.308770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.830 [2024-07-25 05:53:28.395511] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.830 [2024-07-25 05:53:28.395573] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:34.830 [2024-07-25 05:53:28.395588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.830 [2024-07-25 05:53:28.395600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.830 [2024-07-25 05:53:28.395609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:34.830 [2024-07-25 05:53:28.395636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.830 [2024-07-25 05:53:28.480271] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.830 05:53:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.830 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.088 null0 00:33:35.088 [2024-07-25 05:53:28.599368] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:35.088 [2024-07-25 05:53:28.623653] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1773662 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1773662 /var/tmp/bperf.sock 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1773662 ']' 
00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:35.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:35.088 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.088 [2024-07-25 05:53:28.671977] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:33:35.088 [2024-07-25 05:53:28.672040] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773662 ] 00:33:35.088 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.088 [2024-07-25 05:53:28.733499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.345 [2024-07-25 05:53:28.825526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.345 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:35.345 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:35.345 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:35.345 05:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:35.602 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:35.602 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.602 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.602 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.602 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.602 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.167 nvme0n1 00:33:36.167 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:36.167 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.167 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.167 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.167 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:36.167 05:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:36.167 Running I/O for 2 seconds... 00:33:36.167 [2024-07-25 05:53:29.820436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.167 [2024-07-25 05:53:29.820488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.167 [2024-07-25 05:53:29.820510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.167 [2024-07-25 05:53:29.834619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.167 [2024-07-25 05:53:29.834668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.167 [2024-07-25 05:53:29.834685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.167 [2024-07-25 05:53:29.846349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.167 [2024-07-25 05:53:29.846382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.167 [2024-07-25 05:53:29.846400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.167 [2024-07-25 05:53:29.860240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.167 [2024-07-25 05:53:29.860281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19062 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.167 [2024-07-25 05:53:29.860298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.425 [2024-07-25 05:53:29.873052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.425 [2024-07-25 05:53:29.873084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.425 [2024-07-25 05:53:29.873101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.425 [2024-07-25 05:53:29.885677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.425 [2024-07-25 05:53:29.885719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.425 [2024-07-25 05:53:29.885738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.425 [2024-07-25 05:53:29.899154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.425 [2024-07-25 05:53:29.899200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.425 [2024-07-25 05:53:29.899217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.425 [2024-07-25 05:53:29.909835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.425 [2024-07-25 05:53:29.909864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.425 [2024-07-25 05:53:29.909880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.425 [2024-07-25 05:53:29.923115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.425 [2024-07-25 05:53:29.923147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.425 [2024-07-25 05:53:29.923165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.425 [2024-07-25 05:53:29.935283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.425 [2024-07-25 05:53:29.935321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.425 [2024-07-25 05:53:29.935339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.425 [2024-07-25 05:53:29.947961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.425 [2024-07-25 05:53:29.947990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.425 [2024-07-25 05:53:29.948006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:29.961426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:29.961457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:29.961476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:29.974451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:29.974482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:29.974499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:29.984755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:29.984784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:29.984800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:29.998424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:29.998456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:29.998473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:30.013538] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:30.013589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:30.013610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:30.025081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:30.025128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:30.025147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:30.040106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:30.040139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:30.040158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:30.051991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:30.052023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:30.052054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:30.066723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:30.066752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:30.066767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:30.078821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:30.078850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:30.078866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:30.091189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:30.091237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:30.091265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:30.105012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:30.105059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:30.105094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.426 [2024-07-25 05:53:30.117923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.426 [2024-07-25 05:53:30.117954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.426 [2024-07-25 05:53:30.117972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.129800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.129830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.129846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.146072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.146101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.146117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.159677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.159707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.159724] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.171342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.171388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.171405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.185151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.185180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.185196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.198830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.198862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.198879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.210214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.210250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3810 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.210282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.224580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.224618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.224636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.236571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.236617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.236635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.248881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.248909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.248924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.262518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.262564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:15934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.262580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.274895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.274923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.274939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.286840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.286871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.286888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.298752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.298784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.298802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.313438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.313476] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.313493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.325791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.325819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.325850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.338014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.338060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.338077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.351434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.351468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.351500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.365603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.365632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.684 [2024-07-25 05:53:30.365649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.684 [2024-07-25 05:53:30.377395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.684 [2024-07-25 05:53:30.377427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.685 [2024-07-25 05:53:30.377445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.390824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.390856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.390874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.402349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.402380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.402397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.415575] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.415606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.415624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.426633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.426664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.426682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.439322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.439358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.439376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.451499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.451529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.451546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.465396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.465427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.465445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.476329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.476360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.476377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.489124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.489154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.489185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.503372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.503404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.503421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.515490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.515520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.515538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.527752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.527783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.527800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.539591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.539636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.539653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.551965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.551994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 
05:53:30.552010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.564205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.564236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.564261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.577516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.577562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.577582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.593270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.593315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.593332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.606184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.606221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4817 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.606250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.621710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.621746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.621765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.943 [2024-07-25 05:53:30.635706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:36.943 [2024-07-25 05:53:30.635741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.943 [2024-07-25 05:53:30.635761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.201 [2024-07-25 05:53:30.648427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.201 [2024-07-25 05:53:30.648458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.201 [2024-07-25 05:53:30.648475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.201 [2024-07-25 05:53:30.664354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.201 [2024-07-25 05:53:30.664383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.201 [2024-07-25 05:53:30.664405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.201 [2024-07-25 05:53:30.676802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.201 [2024-07-25 05:53:30.676836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.201 [2024-07-25 05:53:30.676857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.201 [2024-07-25 05:53:30.690504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.201 [2024-07-25 05:53:30.690549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.201 [2024-07-25 05:53:30.690569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.201 [2024-07-25 05:53:30.704042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.201 [2024-07-25 05:53:30.704076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.201 [2024-07-25 05:53:30.704096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.201 [2024-07-25 05:53:30.716686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1259cc0) 00:33:37.201 [2024-07-25 05:53:30.716721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.201 [2024-07-25 05:53:30.716739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.728765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.728799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.728818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.744471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.744516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.744533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.756886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.756919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.756939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.771553] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.771588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.771606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.786852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.786892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.786912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.798395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.798439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.798455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.813403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.813432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.813448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.826364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.826396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.826413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.839715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.839749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.839767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.854292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.854324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.854356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.870812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.870850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.870870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.882490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.882538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.882559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.202 [2024-07-25 05:53:30.896259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.202 [2024-07-25 05:53:30.896305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.202 [2024-07-25 05:53:30.896322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:30.910800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:30.910834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:30.910854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:30.922801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:30.922835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:30.922856] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:30.936858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:30.936893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:30.936912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:30.952757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:30.952793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:30.952813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:30.966507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:30.966539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:30.966573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:30.978663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:30.978698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22145 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:30.978716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:30.993776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:30.993811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:30.993829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:31.005842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:31.005876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:31.005895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:31.018946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:31.018981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:31.019006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:31.034139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:31.034172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:3543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:31.034192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:31.048922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:31.048958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:31.048978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:31.061386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:31.061416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:31.061433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:31.077079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:31.077113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:31.077133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:31.094100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:31.094134] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:31.094153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:31.105979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.460 [2024-07-25 05:53:31.106013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.460 [2024-07-25 05:53:31.106031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.460 [2024-07-25 05:53:31.120344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.461 [2024-07-25 05:53:31.120376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.461 [2024-07-25 05:53:31.120392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.461 [2024-07-25 05:53:31.132029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.461 [2024-07-25 05:53:31.132065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.461 [2024-07-25 05:53:31.132084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.461 [2024-07-25 05:53:31.147425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1259cc0) 00:33:37.461 [2024-07-25 05:53:31.147473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.461 [2024-07-25 05:53:31.147492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.718 [2024-07-25 05:53:31.162097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.718 [2024-07-25 05:53:31.162132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.718 [2024-07-25 05:53:31.162152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.718 [2024-07-25 05:53:31.175748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.718 [2024-07-25 05:53:31.175784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.718 [2024-07-25 05:53:31.175803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.718 [2024-07-25 05:53:31.188971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.718 [2024-07-25 05:53:31.189006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.718 [2024-07-25 05:53:31.189026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.718 [2024-07-25 05:53:31.202046] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.718 [2024-07-25 05:53:31.202081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.718 [2024-07-25 05:53:31.202100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.718 [2024-07-25 05:53:31.215146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.215180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.215199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.228530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.228586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.228606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.242936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.242971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.242990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.256760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.256794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.256818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.271684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.271718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.271736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.283410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.283440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.283457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.298681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.298716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.298735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.312891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.312925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.312945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.325362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.325391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.325408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.341633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.341667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.341687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.357289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.357327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 
05:53:31.357344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.371495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.371526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.371557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.387862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.387904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.387925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.400781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.400816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.400835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.719 [2024-07-25 05:53:31.416385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.719 [2024-07-25 05:53:31.416417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16642 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.719 [2024-07-25 05:53:31.416434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.429373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.429401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.429417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.443883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.443918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.443938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.457633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.457667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.457687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.471403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.471434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.471466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.483223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.483267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.483300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.497656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.497689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.497708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.509602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.509637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.509656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.523971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.524005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.524024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.540313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.540349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.540366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.553431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.553460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.553476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.567638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.567672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.977 [2024-07-25 05:53:31.567691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.977 [2024-07-25 05:53:31.581178] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.977 [2024-07-25 05:53:31.581216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.978 [2024-07-25 05:53:31.581236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.978 [2024-07-25 05:53:31.593867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.978 [2024-07-25 05:53:31.593901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.978 [2024-07-25 05:53:31.593920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.978 [2024-07-25 05:53:31.607398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.978 [2024-07-25 05:53:31.607429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.978 [2024-07-25 05:53:31.607446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.978 [2024-07-25 05:53:31.619574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.978 [2024-07-25 05:53:31.619607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.978 [2024-07-25 05:53:31.619632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:37.978 [2024-07-25 05:53:31.634253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.978 [2024-07-25 05:53:31.634296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.978 [2024-07-25 05:53:31.634313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.978 [2024-07-25 05:53:31.645199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.978 [2024-07-25 05:53:31.645250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.978 [2024-07-25 05:53:31.645271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.978 [2024-07-25 05:53:31.660237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.978 [2024-07-25 05:53:31.660273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.978 [2024-07-25 05:53:31.660289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.978 [2024-07-25 05:53:31.673768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:37.978 [2024-07-25 05:53:31.673798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.978 [2024-07-25 05:53:31.673830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 [2024-07-25 05:53:31.687711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:38.236 [2024-07-25 05:53:31.687741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.236 [2024-07-25 05:53:31.687758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 [2024-07-25 05:53:31.699204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:38.236 [2024-07-25 05:53:31.699235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.236 [2024-07-25 05:53:31.699259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 [2024-07-25 05:53:31.715292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:38.236 [2024-07-25 05:53:31.715320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.236 [2024-07-25 05:53:31.715336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 [2024-07-25 05:53:31.726512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:38.236 [2024-07-25 05:53:31.726555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.236 [2024-07-25 
05:53:31.726571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 [2024-07-25 05:53:31.740515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:38.236 [2024-07-25 05:53:31.740545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.236 [2024-07-25 05:53:31.740562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 [2024-07-25 05:53:31.755197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:38.236 [2024-07-25 05:53:31.755250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.236 [2024-07-25 05:53:31.755269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 [2024-07-25 05:53:31.766361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:38.236 [2024-07-25 05:53:31.766389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.236 [2024-07-25 05:53:31.766405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 [2024-07-25 05:53:31.781652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:38.236 [2024-07-25 05:53:31.781698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23872 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.236 [2024-07-25 05:53:31.781715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 [2024-07-25 05:53:31.792728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:38.236 [2024-07-25 05:53:31.792755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.236 [2024-07-25 05:53:31.792786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 [2024-07-25 05:53:31.806854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259cc0) 00:33:38.236 [2024-07-25 05:53:31.806881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.236 [2024-07-25 05:53:31.806911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.236 00:33:38.236 Latency(us) 00:33:38.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.236 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:38.236 nvme0n1 : 2.01 18943.30 74.00 0.00 0.00 6748.45 3616.62 19029.71 00:33:38.236 =================================================================================================================== 00:33:38.236 Total : 18943.30 74.00 0.00 0.00 6748.45 3616.62 19029.71 00:33:38.236 0 00:33:38.236 05:53:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:38.236 05:53:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:33:38.236 05:53:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:38.236 05:53:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:38.236 | .driver_specific 00:33:38.236 | .nvme_error 00:33:38.236 | .status_code 00:33:38.237 | .command_transient_transport_error' 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 )) 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1773662 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1773662 ']' 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1773662 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1773662 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1773662' 00:33:38.495 killing process with pid 1773662 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1773662 00:33:38.495 Received shutdown signal, test time was 
about 2.000000 seconds 00:33:38.495 00:33:38.495 Latency(us) 00:33:38.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.495 =================================================================================================================== 00:33:38.495 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:38.495 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1773662 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1774185 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1774185 /var/tmp/bperf.sock 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1774185 ']' 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:38.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:38.753 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.753 [2024-07-25 05:53:32.360100] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:33:38.753 [2024-07-25 05:53:32.360197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774185 ] 00:33:38.753 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:38.753 Zero copy mechanism will not be used. 
00:33:38.753 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.753 [2024-07-25 05:53:32.418469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.012 [2024-07-25 05:53:32.506926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.012 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:39.012 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:39.012 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:39.012 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:39.269 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:39.269 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.269 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.269 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.269 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.269 05:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.527 nvme0n1 00:33:39.527 05:53:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:39.527 05:53:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.527 05:53:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.527 05:53:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.527 05:53:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:39.527 05:53:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:39.795 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:39.795 Zero copy mechanism will not be used. 00:33:39.795 Running I/O for 2 seconds... 
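The harness above decides pass/fail by reading the controller's per-status-code error counters over JSON-RPC and extracting `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` with jq, then checking the count is non-zero. A minimal Python sketch of that same extraction, run against a synthetic `bdev_get_iostat`-style payload (the bdev name and counter value below are illustrative, not taken from this run):

```python
import json

# Synthetic bdev_get_iostat-style payload (illustrative values only).
iostat_json = json.dumps({
    "bdevs": [
        {
            "name": "nvme0n1",
            "driver_specific": {
                "nvme_error": {
                    "status_code": {
                        "command_transient_transport_error": 149
                    }
                }
            }
        }
    ]
})

def transient_errcount(payload: str) -> int:
    """Mirror the jq filter used by the test script:
    .bdevs[0] | .driver_specific | .nvme_error | .status_code
             | .command_transient_transport_error
    """
    stats = json.loads(payload)
    return stats["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

count = transient_errcount(iostat_json)
# The script's check is equivalent to: (( count > 0 )), i.e. at least one
# injected digest error surfaced as a transient transport error.
assert count > 0
```

In the actual run the payload comes from `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`, as traced in the log above.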
00:33:39.795 [2024-07-25 05:53:33.309466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.309516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.309542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.318938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.318974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.318993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.328280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.328327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.328369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.338296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.338330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.338347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.348649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.348685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.348704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.359484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.359515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.359532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.370185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.370220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.370240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.380775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.380810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.380829] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.391090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.391124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.391143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.401910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.401944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.401964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.411481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.411513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.411530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.421239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.421285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.421305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.431329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.431374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.431390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.442016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.442051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.442070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.453511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.453542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-07-25 05:53:33.453579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.795 [2024-07-25 05:53:33.464360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:39.795 [2024-07-25 05:53:33.464390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-07-25 05:53:33.464406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:39.795 [2024-07-25 05:53:33.476062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:39.795 [2024-07-25 05:53:33.476098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-07-25 05:53:33.476117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:39.795 [2024-07-25 05:53:33.488165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:39.795 [2024-07-25 05:53:33.488200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-07-25 05:53:33.488218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.055 [2024-07-25 05:53:33.499383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.055 [2024-07-25 05:53:33.499414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.055 [2024-07-25 05:53:33.499430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.510874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.510909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.510938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.520124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.520158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.520177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.529592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.529625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.529643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.539357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.539402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.539418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.548754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.548788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.548806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.558104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.558137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.558155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.567521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.567550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.567582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.576595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.576632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.576651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.586100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.586134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.586153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.595517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.595572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.595592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.605266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.605311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.605328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.614882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.614916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.614934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.624311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.624356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.624373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.633789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.633821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.633840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.643164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.643196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.643214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.652567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.652613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.652632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.662026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.662058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.662077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.671321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.671366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.671382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.680652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.680685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.680702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.690030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.690063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.690082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.699482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.699523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.699555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.709587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.709622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.709642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.718974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.719008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.719027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.728353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.728398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.728415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.737767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.737800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.737818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.747259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.747305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.747322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.056 [2024-07-25 05:53:33.756942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.056 [2024-07-25 05:53:33.756976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.056 [2024-07-25 05:53:33.757001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.314 [2024-07-25 05:53:33.766586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.314 [2024-07-25 05:53:33.766619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.314 [2024-07-25 05:53:33.766638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.314 [2024-07-25 05:53:33.775995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.314 [2024-07-25 05:53:33.776028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.314 [2024-07-25 05:53:33.776046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.314 [2024-07-25 05:53:33.785425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.314 [2024-07-25 05:53:33.785454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.314 [2024-07-25 05:53:33.785471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.314 [2024-07-25 05:53:33.794705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.314 [2024-07-25 05:53:33.794737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.314 [2024-07-25 05:53:33.794755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.314 [2024-07-25 05:53:33.804107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.804139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.804157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.813463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.813492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.813508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.822873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.822905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.822922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.832777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.832815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.832834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.842507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.842543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.842561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.853658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.853694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.853713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.865549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.865601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.865620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.876814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.876849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.876868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.888297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.888329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.888346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.899765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.899800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.899819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.911738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.911772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.911791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.923229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.923271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.923292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.934879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.934914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.934932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.946588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.946623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.946642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.958070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.958105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.958124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.969336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.969367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.969384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.980791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.980826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.980844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:33.992186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:33.992220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:33.992238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:34.003592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:34.003627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:34.003646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.315 [2024-07-25 05:53:34.013383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.315 [2024-07-25 05:53:34.013432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.315 [2024-07-25 05:53:34.013449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.573 [2024-07-25 05:53:34.023917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.573 [2024-07-25 05:53:34.023952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.573 [2024-07-25 05:53:34.023970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.573 [2024-07-25 05:53:34.034029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.573 [2024-07-25 05:53:34.034061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.573 [2024-07-25 05:53:34.034085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.573 [2024-07-25 05:53:34.044578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.573 [2024-07-25 05:53:34.044612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.573 [2024-07-25 05:53:34.044631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.573 [2024-07-25 05:53:34.054985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.573 [2024-07-25 05:53:34.055020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.573 [2024-07-25 05:53:34.055038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.573 [2024-07-25 05:53:34.064697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.064732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.064750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.074012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.074045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.074063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.083357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.083390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.083407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.092747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.092781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.092800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.102149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.102183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.102201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.111542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.111576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.111594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.121000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.121033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.121051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.130488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.130532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.130547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.140085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.140117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.140135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.149577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.149610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.149628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.158926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.158959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.158977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.168305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.168349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.168365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.177697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.177729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.177748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.187061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.187093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.187111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.196523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.196569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.196593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.205949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.205981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.205999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.215326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.215372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.215388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.224687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.224720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.224738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:40.574 [2024-07-25 05:53:34.234076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0)
00:33:40.574 [2024-07-25 05:53:34.234108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.574 [2024-07-25 05:53:34.234126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021
p:0 m:0 dnr:0 00:33:40.574 [2024-07-25 05:53:34.243456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.574 [2024-07-25 05:53:34.243486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.574 [2024-07-25 05:53:34.243517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.574 [2024-07-25 05:53:34.252780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.574 [2024-07-25 05:53:34.252812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.574 [2024-07-25 05:53:34.252830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.574 [2024-07-25 05:53:34.262294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.574 [2024-07-25 05:53:34.262322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.574 [2024-07-25 05:53:34.262352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.574 [2024-07-25 05:53:34.271761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.574 [2024-07-25 05:53:34.271795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.574 [2024-07-25 05:53:34.271813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.281137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.281175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.281194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.290563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.290595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.290613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.300071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.300103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.300121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.309612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.309645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.309663] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.319150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.319183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.319201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.328656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.328689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.328707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.338264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.338317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.338335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.347514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.347565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.347581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.356999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.357033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.357051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.366413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.366444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.366460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.375863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.375897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.375917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.385182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.385216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.385235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.394629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.394663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.394681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.403999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.404032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.404051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.413748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.413780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.413798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.423157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.423191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.423209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.432579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.432612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.432630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.442005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.442039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.442062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.451332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.451380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.451395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.460720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.460753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.460771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.470152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.470187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.470206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.479544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.479577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.479594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.489646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.489680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.489699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.499063] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.499096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.499114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.833 [2024-07-25 05:53:34.508459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.833 [2024-07-25 05:53:34.508487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.833 [2024-07-25 05:53:34.508503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.834 [2024-07-25 05:53:34.517771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.834 [2024-07-25 05:53:34.517803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.834 [2024-07-25 05:53:34.517821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.834 [2024-07-25 05:53:34.527103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:40.834 [2024-07-25 05:53:34.527141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.834 [2024-07-25 05:53:34.527159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.536765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.536798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.536816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.546895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.546930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.546948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.558521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.558553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.558570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.569018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.569052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.569071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.580485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.580531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.580547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.592176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.592212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.592231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.603812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.603847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.603866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.615270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.615320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.615337] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.626696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.626731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.626750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.638584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.638619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.638638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.092 [2024-07-25 05:53:34.650042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.092 [2024-07-25 05:53:34.650076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.092 [2024-07-25 05:53:34.650095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.661205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.661240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.661269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.672198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.672232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.672259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.684157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.684192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.684210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.694706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.694741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.694759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.706058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.706092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.706111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.716936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.716972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.716996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.728222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.728263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.728297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.739725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.739759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.739778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.749918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.749952] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.749972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.759222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.759263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.759296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.768587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.768619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.768638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.778055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.778088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.778105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.093 [2024-07-25 05:53:34.787500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22976b0) 00:33:41.093 [2024-07-25 05:53:34.787530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.093 [2024-07-25 05:53:34.787546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.796828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.796860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.796879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.806331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.806361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.806378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.815730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.815763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.815781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.825116] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.825149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.825167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.834657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.834689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.834707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.844079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.844116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.844135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.853496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.853526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.853543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.862830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.862864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.862883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.872207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.872240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.872271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.881656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.881689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.881713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.891604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.891638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.891656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.901038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.901071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.901089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.910526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.910555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.910571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.920085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.920117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.920135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.929506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.929535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.929551] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.939636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.939669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.939688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.949142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.949175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.949193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.958614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.958646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.958664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.968070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.968108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.968127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.977583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.977630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.977648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.987018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.987051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.987068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:34.996583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:34.996615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:34.996632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:35.006120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:35.006152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:35.006170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:35.015745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:35.015777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:35.015796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:35.025572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.352 [2024-07-25 05:53:35.025606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.352 [2024-07-25 05:53:35.025624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.352 [2024-07-25 05:53:35.035049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.353 [2024-07-25 05:53:35.035081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.353 [2024-07-25 05:53:35.035099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.353 [2024-07-25 05:53:35.044521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.353 [2024-07-25 05:53:35.044551] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.353 [2024-07-25 05:53:35.044567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.611 [2024-07-25 05:53:35.053981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.611 [2024-07-25 05:53:35.054014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.611 [2024-07-25 05:53:35.054032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.611 [2024-07-25 05:53:35.063386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.611 [2024-07-25 05:53:35.063415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.611 [2024-07-25 05:53:35.063432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.611 [2024-07-25 05:53:35.072860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.611 [2024-07-25 05:53:35.072892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.072910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.082384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.082413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.082429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.091728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.091760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.091778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.101230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.101291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.101311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.110584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.110618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.110636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.119965] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.119998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.120016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.129328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.129358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.129397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.138875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.138908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.138926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.148287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.148316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.148333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.157705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.157738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.157756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.167105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.167138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.167156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.176536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.176565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.176581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.185991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.186023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.186042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.195357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.195386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.195402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.204726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.204758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.204777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.214116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.214154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.214173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.223525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.223554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.223571] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.232964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.232997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.233015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.242277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.242322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.242338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.251698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.251730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.251748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.261112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.261146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.261164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.270669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.270703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.270721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.280207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.280240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.280270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.289623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.289657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.289675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.612 [2024-07-25 05:53:35.299066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22976b0) 00:33:41.612 [2024-07-25 05:53:35.299099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.612 [2024-07-25 05:53:35.299117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.612 00:33:41.612 Latency(us) 00:33:41.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.612 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:41.612 nvme0n1 : 2.00 3131.21 391.40 0.00 0.00 5103.94 1219.70 13010.11 00:33:41.612 =================================================================================================================== 00:33:41.612 Total : 3131.21 391.40 0.00 0.00 5103.94 1219.70 13010.11 00:33:41.612 0 00:33:41.870 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:41.870 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:41.870 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:41.870 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:41.870 | .driver_specific 00:33:41.870 | .nvme_error 00:33:41.870 | .status_code 00:33:41.870 | .command_transient_transport_error' 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 202 > 0 )) 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1774185 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1774185 ']' 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1774185 00:33:42.129 05:53:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1774185 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1774185' 00:33:42.129 killing process with pid 1774185 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1774185 00:33:42.129 Received shutdown signal, test time was about 2.000000 seconds 00:33:42.129 00:33:42.129 Latency(us) 00:33:42.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.129 =================================================================================================================== 00:33:42.129 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:42.129 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1774185 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # qd=128 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1774586 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1774586 /var/tmp/bperf.sock 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1774586 ']' 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:42.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:42.387 05:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:42.387 [2024-07-25 05:53:35.918257] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:33:42.387 [2024-07-25 05:53:35.918334] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774586 ] 00:33:42.387 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.387 [2024-07-25 05:53:35.979986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.387 [2024-07-25 05:53:36.071124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.645 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:42.645 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:42.645 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:42.645 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:42.902 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:42.902 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.902 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:42.902 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.902 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.902 05:53:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:43.161 nvme0n1 00:33:43.161 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:43.161 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.161 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:43.161 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.161 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:43.161 05:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:43.419 Running I/O for 2 seconds... 
00:33:43.419 [2024-07-25 05:53:36.986485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190edd58 00:33:43.419 [2024-07-25 05:53:36.987533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.419 [2024-07-25 05:53:36.987576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:43.419 [2024-07-25 05:53:36.997801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fa3a0 00:33:43.419 [2024-07-25 05:53:36.998864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.419 [2024-07-25 05:53:36.998893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:43.419 [2024-07-25 05:53:37.010154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e3d08 00:33:43.419 [2024-07-25 05:53:37.011298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.419 [2024-07-25 05:53:37.011327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:43.419 [2024-07-25 05:53:37.022516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e49b0 00:33:43.419 [2024-07-25 05:53:37.023825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.419 [2024-07-25 05:53:37.023855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:43.419 [2024-07-25 05:53:37.034848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e7c50 00:33:43.419 [2024-07-25 05:53:37.036299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.419 [2024-07-25 05:53:37.036329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:43.419 [2024-07-25 05:53:37.047028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fa3a0 00:33:43.419 [2024-07-25 05:53:37.048601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.419 [2024-07-25 05:53:37.048630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:43.419 [2024-07-25 05:53:37.057929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f0bc0 00:33:43.419 [2024-07-25 05:53:37.059080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.419 [2024-07-25 05:53:37.059109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:43.419 [2024-07-25 05:53:37.069959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e8088 00:33:43.419 [2024-07-25 05:53:37.070963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.419 [2024-07-25 05:53:37.070992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:43.419 [2024-07-25 05:53:37.083371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f4298 00:33:43.419 [2024-07-25 05:53:37.085218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.419 [2024-07-25 05:53:37.085252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:43.419 [2024-07-25 05:53:37.091587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ea680 00:33:43.420 [2024-07-25 05:53:37.092512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.420 [2024-07-25 05:53:37.092541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:43.420 [2024-07-25 05:53:37.103558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e1710 00:33:43.420 [2024-07-25 05:53:37.104478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.420 [2024-07-25 05:53:37.104506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:43.420 [2024-07-25 05:53:37.114568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f3e60 00:33:43.420 [2024-07-25 05:53:37.115387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.420 [2024-07-25 05:53:37.115415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:43.678 [2024-07-25 05:53:37.127175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190feb58 00:33:43.678 [2024-07-25 05:53:37.128257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.678 [2024-07-25 05:53:37.128285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:43.678 [2024-07-25 05:53:37.140305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e12d8 00:33:43.678 [2024-07-25 05:53:37.141541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.678 [2024-07-25 05:53:37.141569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.678 [2024-07-25 05:53:37.152363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f35f0 00:33:43.678 [2024-07-25 05:53:37.153619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.678 [2024-07-25 05:53:37.153647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.678 [2024-07-25 05:53:37.164635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ea248 00:33:43.678 [2024-07-25 05:53:37.166122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:43.678 [2024-07-25 05:53:37.166151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:43.678 [2024-07-25 05:53:37.174313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190df118 00:33:43.678 [2024-07-25 05:53:37.175197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.175231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.185164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f1430 00:33:43.679 [2024-07-25 05:53:37.186012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.186039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.198337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e7c50 00:33:43.679 [2024-07-25 05:53:37.199332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.199367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.210385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e73e0 00:33:43.679 [2024-07-25 05:53:37.211505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16770 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.211533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.221401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190feb58 00:33:43.679 [2024-07-25 05:53:37.222500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.222528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.234417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e9e10 00:33:43.679 [2024-07-25 05:53:37.235759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.235787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.246494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fe720 00:33:43.679 [2024-07-25 05:53:37.247987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.248014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.257644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f6458 00:33:43.679 [2024-07-25 05:53:37.259156] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.259186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.270124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e38d0 00:33:43.679 [2024-07-25 05:53:37.271789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.271819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.281038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e88f8 00:33:43.679 [2024-07-25 05:53:37.282251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.282279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.292732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190eea00 00:33:43.679 [2024-07-25 05:53:37.293880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.293908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.304546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190efae0 00:33:43.679 [2024-07-25 05:53:37.305726] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.305754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.316679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ebfd0 00:33:43.679 [2024-07-25 05:53:37.317964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.317997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.329359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f3e60 00:33:43.679 [2024-07-25 05:53:37.330598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.330643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.343701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e0ea0 00:33:43.679 [2024-07-25 05:53:37.345573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.345604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.356932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e7818 
00:33:43.679 [2024-07-25 05:53:37.359016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.359048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.365978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ddc00 00:33:43.679 [2024-07-25 05:53:37.366846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.366877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:43.679 [2024-07-25 05:53:37.378355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f0bc0 00:33:43.679 [2024-07-25 05:53:37.379267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.679 [2024-07-25 05:53:37.379322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.391843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f9b30 00:33:43.966 [2024-07-25 05:53:37.392815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.392847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.405109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe86480) with pdu=0x2000190e99d8 00:33:43.966 [2024-07-25 05:53:37.406354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.406382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.418366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e9e10 00:33:43.966 [2024-07-25 05:53:37.419730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.419761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.431704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f57b0 00:33:43.966 [2024-07-25 05:53:37.433258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.433289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.443500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f9b30 00:33:43.966 [2024-07-25 05:53:37.444563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.444607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.455967] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e6300 00:33:43.966 [2024-07-25 05:53:37.457048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.457079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.468675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f20d8 00:33:43.966 [2024-07-25 05:53:37.469758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.469789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.481715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e95a0 00:33:43.966 [2024-07-25 05:53:37.482620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.482651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.494929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e1710 00:33:43.966 [2024-07-25 05:53:37.496032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.496068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:33:43.966 [2024-07-25 05:53:37.509457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ee190 00:33:43.966 [2024-07-25 05:53:37.511625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.511658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.518546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e6738 00:33:43.966 [2024-07-25 05:53:37.519439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.519469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.530556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f2948 00:33:43.966 [2024-07-25 05:53:37.531414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.531446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.543714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fe720 00:33:43.966 [2024-07-25 05:53:37.544726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.544758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.556979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f6cc8 00:33:43.966 [2024-07-25 05:53:37.558178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.558210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.570183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f7100 00:33:43.966 [2024-07-25 05:53:37.571624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.966 [2024-07-25 05:53:37.571654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:43.966 [2024-07-25 05:53:37.583481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ed4e8 00:33:43.966 [2024-07-25 05:53:37.585008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.967 [2024-07-25 05:53:37.585039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:43.967 [2024-07-25 05:53:37.596667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fe720 00:33:43.967 [2024-07-25 05:53:37.598377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.967 [2024-07-25 05:53:37.598406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:43.967 [2024-07-25 05:53:37.609823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e0ea0 00:33:43.967 [2024-07-25 05:53:37.611715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.967 [2024-07-25 05:53:37.611747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.967 [2024-07-25 05:53:37.621695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f3a28 00:33:43.967 [2024-07-25 05:53:37.623113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.967 [2024-07-25 05:53:37.623144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:43.967 [2024-07-25 05:53:37.633161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f2510 00:33:43.967 [2024-07-25 05:53:37.635055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.967 [2024-07-25 05:53:37.635086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:43.967 [2024-07-25 05:53:37.644358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e5a90 00:33:43.967 [2024-07-25 05:53:37.645227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.967 [2024-07-25 05:53:37.645267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.657185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ecc78 00:33:44.231 [2024-07-25 05:53:37.658102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.658133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.670568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f2948 00:33:44.231 [2024-07-25 05:53:37.671610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.671640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.683993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f7970 00:33:44.231 [2024-07-25 05:53:37.685225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.685263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.697316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f7538 00:33:44.231 [2024-07-25 05:53:37.698707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 
[2024-07-25 05:53:37.698738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.710659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f0ff8 00:33:44.231 [2024-07-25 05:53:37.712175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.712205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.722429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f2948 00:33:44.231 [2024-07-25 05:53:37.723459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.723490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.735211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f4b08 00:33:44.231 [2024-07-25 05:53:37.736107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.736137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.749756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e0a68 00:33:44.231 [2024-07-25 05:53:37.751670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20006 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.751702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.763074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fd640 00:33:44.231 [2024-07-25 05:53:37.765165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.765198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.772190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e2c28 00:33:44.231 [2024-07-25 05:53:37.773104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.773139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.784204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f31b8 00:33:44.231 [2024-07-25 05:53:37.785109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.785141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.797535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ebfd0 00:33:44.231 [2024-07-25 05:53:37.798567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:7082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.798610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.811660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f6cc8 00:33:44.231 [2024-07-25 05:53:37.812917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.812950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.824715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190eee38 00:33:44.231 [2024-07-25 05:53:37.826096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.826132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.836625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f57b0 00:33:44.231 [2024-07-25 05:53:37.837971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.838002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.849873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e6b70 00:33:44.231 [2024-07-25 05:53:37.851378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.851406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.863026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e9e10 00:33:44.231 [2024-07-25 05:53:37.864705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.864736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.876320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e6300 00:33:44.231 [2024-07-25 05:53:37.878195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.878227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.888125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f4298 00:33:44.231 [2024-07-25 05:53:37.889620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.889653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.899716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f2d80 00:33:44.231 
[2024-07-25 05:53:37.901589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.901621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.910595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190dece0 00:33:44.231 [2024-07-25 05:53:37.911468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.231 [2024-07-25 05:53:37.911496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:44.231 [2024-07-25 05:53:37.923768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e5658 00:33:44.231 [2024-07-25 05:53:37.924776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.232 [2024-07-25 05:53:37.924807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:37.937401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f7970 00:33:44.490 [2024-07-25 05:53:37.938620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:37.938657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:37.951592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe86480) with pdu=0x2000190f92c0 00:33:44.490 [2024-07-25 05:53:37.953002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:37.953033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:37.964147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ed4e8 00:33:44.490 [2024-07-25 05:53:37.965598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:37.965629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:37.977287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e9168 00:33:44.490 [2024-07-25 05:53:37.978842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:37.978872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:37.989302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e1b48 00:33:44.490 [2024-07-25 05:53:37.990804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:37.990835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.002577] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190de038 00:33:44.490 [2024-07-25 05:53:38.004263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.004310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.015769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fa7d8 00:33:44.490 [2024-07-25 05:53:38.017663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.017706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.027688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ec840 00:33:44.490 [2024-07-25 05:53:38.029083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.029116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.039193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e99d8 00:33:44.490 [2024-07-25 05:53:38.041059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.041090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:33:44.490 [2024-07-25 05:53:38.050040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e6b70 00:33:44.490 [2024-07-25 05:53:38.050910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.050940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.063336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e5a90 00:33:44.490 [2024-07-25 05:53:38.064356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.064387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.076493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e73e0 00:33:44.490 [2024-07-25 05:53:38.077681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.077712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.089777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e6fa8 00:33:44.490 [2024-07-25 05:53:38.091142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.091173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.102989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fd640 00:33:44.490 [2024-07-25 05:53:38.104608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.104639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.114853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e5a90 00:33:44.490 [2024-07-25 05:53:38.115921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.115952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.127649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f57b0 00:33:44.490 [2024-07-25 05:53:38.128525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.128568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.140854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190de8a8 00:33:44.490 [2024-07-25 05:53:38.141937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.141968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.152849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190df988 00:33:44.490 [2024-07-25 05:53:38.154739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.490 [2024-07-25 05:53:38.154770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.490 [2024-07-25 05:53:38.163712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fc560 00:33:44.490 [2024-07-25 05:53:38.164627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.491 [2024-07-25 05:53:38.164658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:44.491 [2024-07-25 05:53:38.177054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e1f80 00:33:44.491 [2024-07-25 05:53:38.178095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.491 [2024-07-25 05:53:38.178126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:44.491 [2024-07-25 05:53:38.190543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f57b0 00:33:44.749 [2024-07-25 05:53:38.191811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.749 [2024-07-25 05:53:38.191842] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:44.749 [2024-07-25 05:53:38.203997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f5378 00:33:44.749 [2024-07-25 05:53:38.205400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.749 [2024-07-25 05:53:38.205427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:44.749 [2024-07-25 05:53:38.217227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f0ff8 00:33:44.749 [2024-07-25 05:53:38.218761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.749 [2024-07-25 05:53:38.218791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:44.749 [2024-07-25 05:53:38.230557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e1f80 00:33:44.749 [2024-07-25 05:53:38.232262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.749 [2024-07-25 05:53:38.232309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:44.749 [2024-07-25 05:53:38.242347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fa7d8 00:33:44.749 [2024-07-25 05:53:38.243580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18132 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:44.749 [2024-07-25 05:53:38.243612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:44.749 [2024-07-25 05:53:38.255177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f7538 00:33:44.749 [2024-07-25 05:53:38.256259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.749 [2024-07-25 05:53:38.256309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:44.749 [2024-07-25 05:53:38.267154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190eb760 00:33:44.749 [2024-07-25 05:53:38.269406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.269450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.278217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fb8b8 00:33:44.750 [2024-07-25 05:53:38.279116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.279148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.291570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e27f0 00:33:44.750 [2024-07-25 05:53:38.292606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:23437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.292637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.305709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190eaab8 00:33:44.750 [2024-07-25 05:53:38.306951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.306983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.318187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f46d0 00:33:44.750 [2024-07-25 05:53:38.319324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.319352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.330792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f1430 00:33:44.750 [2024-07-25 05:53:38.332145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.332174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.343488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190df988 00:33:44.750 [2024-07-25 05:53:38.344560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.344591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.355519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f9b30 00:33:44.750 [2024-07-25 05:53:38.357386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.357413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.366490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e01f8 00:33:44.750 [2024-07-25 05:53:38.367442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.367469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.380622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190eff18 00:33:44.750 [2024-07-25 05:53:38.381738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.381769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.393371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fd640 00:33:44.750 
[2024-07-25 05:53:38.394447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.394474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.406507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ec408 00:33:44.750 [2024-07-25 05:53:38.407404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.407432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.419783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f4b08 00:33:44.750 [2024-07-25 05:53:38.420823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.420853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.434335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f8618 00:33:44.750 [2024-07-25 05:53:38.436433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.436460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:44.750 [2024-07-25 05:53:38.443352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) 
with pdu=0x2000190fef90 00:33:44.750 [2024-07-25 05:53:38.444216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.750 [2024-07-25 05:53:38.444252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:45.009 [2024-07-25 05:53:38.457060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190df550 00:33:45.009 [2024-07-25 05:53:38.458116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.009 [2024-07-25 05:53:38.458147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:45.009 [2024-07-25 05:53:38.469089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e4578 00:33:45.009 [2024-07-25 05:53:38.470126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.009 [2024-07-25 05:53:38.470156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:45.009 [2024-07-25 05:53:38.482403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ec408 00:33:45.009 [2024-07-25 05:53:38.483730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.009 [2024-07-25 05:53:38.483761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:45.009 [2024-07-25 05:53:38.495756] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe86480) with pdu=0x2000190f5378 00:33:45.009 [2024-07-25 05:53:38.497128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.009 [2024-07-25 05:53:38.497159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:45.009 [2024-07-25 05:53:38.509041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e23b8 00:33:45.009 [2024-07-25 05:53:38.510640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.009 [2024-07-25 05:53:38.510671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:45.009 [2024-07-25 05:53:38.522385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e4578 00:33:45.009 [2024-07-25 05:53:38.524171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.524213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.535763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f9f68 00:33:45.010 [2024-07-25 05:53:38.537649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.537683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.549072] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fcdd0 00:33:45.010 [2024-07-25 05:53:38.551158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.551189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.558130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f6890 00:33:45.010 [2024-07-25 05:53:38.558994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.559025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.571071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e7818 00:33:45.010 [2024-07-25 05:53:38.571952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.571984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.582841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f46d0 00:33:45.010 [2024-07-25 05:53:38.583673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.583703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:33:45.010 [2024-07-25 05:53:38.596947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fa7d8 00:33:45.010 [2024-07-25 05:53:38.598024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.598060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.609952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fd640 00:33:45.010 [2024-07-25 05:53:38.610816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.610847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.623204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e95a0 00:33:45.010 [2024-07-25 05:53:38.624253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.624299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.635101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f1430 00:33:45.010 [2024-07-25 05:53:38.636978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.637010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.646112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f3a28 00:33:45.010 [2024-07-25 05:53:38.646983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.647014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.659430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fb048 00:33:45.010 [2024-07-25 05:53:38.660469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.660497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.672718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fd640 00:33:45.010 [2024-07-25 05:53:38.673893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.673924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.686016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e27f0 00:33:45.010 [2024-07-25 05:53:38.687447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.687475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.697896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ed0b0 00:33:45.010 [2024-07-25 05:53:38.698755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.010 [2024-07-25 05:53:38.698785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:45.010 [2024-07-25 05:53:38.710893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e6300 00:33:45.268 [2024-07-25 05:53:38.711701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 [2024-07-25 05:53:38.711732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.724523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ddc00 00:33:45.268 [2024-07-25 05:53:38.725397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 [2024-07-25 05:53:38.725425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.737783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f3a28 00:33:45.268 [2024-07-25 05:53:38.738811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 [2024-07-25 05:53:38.738842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.749779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ef6a8 00:33:45.268 [2024-07-25 05:53:38.751653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 [2024-07-25 05:53:38.751683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.761590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e3498 00:33:45.268 [2024-07-25 05:53:38.762490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 [2024-07-25 05:53:38.762517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.774710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e6300 00:33:45.268 [2024-07-25 05:53:38.775795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 [2024-07-25 05:53:38.775837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.786793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ec408 00:33:45.268 [2024-07-25 05:53:38.787805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 
[2024-07-25 05:53:38.787839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.800122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f4f40 00:33:45.268 [2024-07-25 05:53:38.801330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 [2024-07-25 05:53:38.801358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.813337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e7c50 00:33:45.268 [2024-07-25 05:53:38.814689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 [2024-07-25 05:53:38.814720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.826643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e9e10 00:33:45.268 [2024-07-25 05:53:38.828180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 [2024-07-25 05:53:38.828212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.838471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190ec408 00:33:45.268 [2024-07-25 05:53:38.839578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9100 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:45.268 [2024-07-25 05:53:38.839609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:45.268 [2024-07-25 05:53:38.851332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fd640 00:33:45.269 [2024-07-25 05:53:38.852171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.269 [2024-07-25 05:53:38.852201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:45.269 [2024-07-25 05:53:38.864529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f8a50 00:33:45.269 [2024-07-25 05:53:38.865643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.269 [2024-07-25 05:53:38.865674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:45.269 [2024-07-25 05:53:38.876513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fb480 00:33:45.269 [2024-07-25 05:53:38.878354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.269 [2024-07-25 05:53:38.878382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:45.269 [2024-07-25 05:53:38.888176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e1b48 00:33:45.269 [2024-07-25 05:53:38.889057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:7061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.269 [2024-07-25 05:53:38.889088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:45.269 [2024-07-25 05:53:38.901289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190eaab8 00:33:45.269 [2024-07-25 05:53:38.902319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.269 [2024-07-25 05:53:38.902347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:45.269 [2024-07-25 05:53:38.914676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e73e0 00:33:45.269 [2024-07-25 05:53:38.915875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.269 [2024-07-25 05:53:38.915906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:45.269 [2024-07-25 05:53:38.926743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190fd640 00:33:45.269 [2024-07-25 05:53:38.927915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.269 [2024-07-25 05:53:38.927952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:45.269 [2024-07-25 05:53:38.940046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190de470 00:33:45.269 [2024-07-25 05:53:38.941369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.269 [2024-07-25 05:53:38.941398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:45.269 [2024-07-25 05:53:38.953271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f4298 00:33:45.269 [2024-07-25 05:53:38.954841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.269 [2024-07-25 05:53:38.954883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:45.269 [2024-07-25 05:53:38.966712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190e9168 00:33:45.269 [2024-07-25 05:53:38.968497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.269 [2024-07-25 05:53:38.968525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:45.527 [2024-07-25 05:53:38.978854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe86480) with pdu=0x2000190f2d80 00:33:45.527 [2024-07-25 05:53:38.980088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.527 [2024-07-25 05:53:38.980119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:45.527 00:33:45.527 Latency(us) 00:33:45.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.527 Job: nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:33:45.527 nvme0n1 : 2.01 20386.52 79.63 0.00 0.00 6268.57 2451.53 15728.64 00:33:45.527 =================================================================================================================== 00:33:45.527 Total : 20386.52 79.63 0.00 0.00 6268.57 2451.53 15728.64 00:33:45.527 0 00:33:45.527 05:53:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:45.527 05:53:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:45.527 05:53:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:45.527 05:53:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:45.527 | .driver_specific 00:33:45.527 | .nvme_error 00:33:45.527 | .status_code 00:33:45.527 | .command_transient_transport_error' 00:33:45.784 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 )) 00:33:45.784 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1774586 00:33:45.784 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1774586 ']' 00:33:45.785 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1774586 00:33:45.785 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:45.785 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:45.785 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1774586 00:33:45.785 05:53:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:45.785 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:45.785 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1774586' 00:33:45.785 killing process with pid 1774586 00:33:45.785 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1774586 00:33:45.785 Received shutdown signal, test time was about 2.000000 seconds 00:33:45.785 00:33:45.785 Latency(us) 00:33:45.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.785 =================================================================================================================== 00:33:45.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:45.785 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1774586 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1775004 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:46.042 
05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1775004 /var/tmp/bperf.sock 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1775004 ']' 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:46.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:46.042 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.042 [2024-07-25 05:53:39.528731] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:33:46.042 [2024-07-25 05:53:39.528808] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775004 ] 00:33:46.042 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:46.042 Zero copy mechanism will not be used. 
00:33:46.042 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.042 [2024-07-25 05:53:39.587987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.043 [2024-07-25 05:53:39.677580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.300 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:46.300 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:46.300 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:46.300 05:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:46.558 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:46.558 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.558 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.558 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.558 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.558 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:47.124 nvme0n1 00:33:47.124 05:53:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:47.124 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.124 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:47.124 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.124 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:47.124 05:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:47.124 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:47.124 Zero copy mechanism will not be used. 00:33:47.124 Running I/O for 2 seconds... 
00:33:47.124 [2024-07-25 05:53:40.698999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.124 [2024-07-25 05:53:40.699418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.124 [2024-07-25 05:53:40.699470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.124 [2024-07-25 05:53:40.711739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.124 [2024-07-25 05:53:40.712122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.124 [2024-07-25 05:53:40.712156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.124 [2024-07-25 05:53:40.724497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.124 [2024-07-25 05:53:40.724794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.124 [2024-07-25 05:53:40.724827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.124 [2024-07-25 05:53:40.737852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.124 [2024-07-25 05:53:40.738048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.124 [2024-07-25 05:53:40.738081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.124 [2024-07-25 05:53:40.750521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.124 [2024-07-25 05:53:40.750923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.124 [2024-07-25 05:53:40.750957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.124 [2024-07-25 05:53:40.763288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.124 [2024-07-25 05:53:40.763676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.124 [2024-07-25 05:53:40.763724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.124 [2024-07-25 05:53:40.776683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.124 [2024-07-25 05:53:40.777021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.124 [2024-07-25 05:53:40.777051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.124 [2024-07-25 05:53:40.788321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.124 [2024-07-25 05:53:40.788658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.124 [2024-07-25 05:53:40.788702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.124 [2024-07-25 05:53:40.800587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.124 [2024-07-25 05:53:40.800750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.124 [2024-07-25 05:53:40.800780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.124 [2024-07-25 05:53:40.812988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.124 [2024-07-25 05:53:40.813365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.124 [2024-07-25 05:53:40.813409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.124 [2024-07-25 05:53:40.825164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.825545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.825575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.836796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.837127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:47.383 [2024-07-25 05:53:40.837169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.848710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.849094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.849137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.861080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.861444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.861473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.873238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.873600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.873628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.885158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.885535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.885580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.896630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.896904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.896932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.907904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.908378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.908408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.919355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.919781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.919810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.930091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.930470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.930500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.940899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.941335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.941365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.951866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.952318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.952352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.963071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.963526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.963570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.973734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 
00:33:47.383 [2024-07-25 05:53:40.974182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.974210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.984270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.984699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.984742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:40.994512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:40.994916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:40.994959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:41.004720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:41.005041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:41.005070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.383 [2024-07-25 05:53:41.014883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.383 [2024-07-25 05:53:41.015282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.383 [2024-07-25 05:53:41.015312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.384 [2024-07-25 05:53:41.026057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.384 [2024-07-25 05:53:41.026447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.384 [2024-07-25 05:53:41.026477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.384 [2024-07-25 05:53:41.036832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.384 [2024-07-25 05:53:41.037293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.384 [2024-07-25 05:53:41.037323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.384 [2024-07-25 05:53:41.048155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.384 [2024-07-25 05:53:41.048546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.384 [2024-07-25 05:53:41.048593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.384 [2024-07-25 05:53:41.058745] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.384 [2024-07-25 05:53:41.059182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.384 [2024-07-25 05:53:41.059211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.384 [2024-07-25 05:53:41.068630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.384 [2024-07-25 05:53:41.068969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.384 [2024-07-25 05:53:41.068999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.384 [2024-07-25 05:53:41.079192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.384 [2024-07-25 05:53:41.079652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.384 [2024-07-25 05:53:41.079681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.091385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.091707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.091739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.102342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.102758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.102787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.113454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.114008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.114036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.123833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.124219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.124271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.134687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.135081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.135129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.145280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.145646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.145693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.155209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.155686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.155714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.165357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.165780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.165809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.176878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.177295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.177325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.188217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.188603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.188646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.198792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.199264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.199301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.209689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.210129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.210182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.220267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.220699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:47.643 [2024-07-25 05:53:41.220732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.231270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.231655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.231690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.241042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.241437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.241468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.250900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.251289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.251319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.262333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.262687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.262731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.272554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.643 [2024-07-25 05:53:41.272896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.643 [2024-07-25 05:53:41.272926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.643 [2024-07-25 05:53:41.282398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.644 [2024-07-25 05:53:41.282808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.644 [2024-07-25 05:53:41.282837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.644 [2024-07-25 05:53:41.292549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.644 [2024-07-25 05:53:41.292881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.644 [2024-07-25 05:53:41.292910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.644 [2024-07-25 05:53:41.303622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.644 [2024-07-25 05:53:41.303997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.644 [2024-07-25 05:53:41.304026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.644 [2024-07-25 05:53:41.314914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.644 [2024-07-25 05:53:41.315202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.644 [2024-07-25 05:53:41.315232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.644 [2024-07-25 05:53:41.324906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.644 [2024-07-25 05:53:41.325254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.644 [2024-07-25 05:53:41.325286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.644 [2024-07-25 05:53:41.336711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.644 [2024-07-25 05:53:41.337054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.644 [2024-07-25 05:53:41.337083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.902 [2024-07-25 05:53:41.347567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 
00:33:47.902 [2024-07-25 05:53:41.347922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.902 [2024-07-25 05:53:41.347952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.902 [2024-07-25 05:53:41.358974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.902 [2024-07-25 05:53:41.359421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.902 [2024-07-25 05:53:41.359451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.902 [2024-07-25 05:53:41.368944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.902 [2024-07-25 05:53:41.369386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.902 [2024-07-25 05:53:41.369416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.379913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.380290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.380333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.389744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.390026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.390055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.400385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.400783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.400810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.410775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.411235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.411290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.420928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.421309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.421355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.431823] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.432122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.432151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.442721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.443073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.443103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.452840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.453248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.453278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.463601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.464022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.464067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.474810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.475138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.475169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.485421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.485853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.485882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.496183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.496653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.496683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.506173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.506632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.506665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.517397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.517856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.517884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.528490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.528868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.528898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.538663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.539178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.539207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.550617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.551089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.551132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.561896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.562320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.562349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.572726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.573173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.573201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.583464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.583836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.903 [2024-07-25 05:53:41.583865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.903 [2024-07-25 05:53:41.595141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:47.903 [2024-07-25 05:53:41.595619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:47.903 [2024-07-25 05:53:41.595662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.605754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.606117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.606146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.616558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.616957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.617000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.627333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.627853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.627881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.638282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.638643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.638673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.648824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.649294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.649324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.659928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.660274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.660304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.669580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.669952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.669982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.680690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.681073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.681117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.690842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.691222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.691279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.702233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.702645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.702674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.713537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.713958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.714002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.725055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 
00:33:48.162 [2024-07-25 05:53:41.725537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.725582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.736672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.737121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.737149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.747397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.747786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.747814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.758030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.758406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.758436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.768019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.768383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.768416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.778378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.778826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.778855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.162 [2024-07-25 05:53:41.788817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.162 [2024-07-25 05:53:41.789168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.162 [2024-07-25 05:53:41.789202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.163 [2024-07-25 05:53:41.799140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.163 [2024-07-25 05:53:41.799589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.163 [2024-07-25 05:53:41.799619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.163 [2024-07-25 05:53:41.809956] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.163 [2024-07-25 05:53:41.810396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.163 [2024-07-25 05:53:41.810426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.163 [2024-07-25 05:53:41.820862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.163 [2024-07-25 05:53:41.821238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.163 [2024-07-25 05:53:41.821274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.163 [2024-07-25 05:53:41.831942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.163 [2024-07-25 05:53:41.832190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.163 [2024-07-25 05:53:41.832219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.163 [2024-07-25 05:53:41.842943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.163 [2024-07-25 05:53:41.843388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.163 [2024-07-25 05:53:41.843418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:33:48.163 [2024-07-25 05:53:41.853662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.163 [2024-07-25 05:53:41.854044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.163 [2024-07-25 05:53:41.854088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.864781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.865208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.865237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.876669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.877054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.877099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.887422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.887738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.887768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.898287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.898732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.898760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.909834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.910128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.910159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.919629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.920048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.920077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.930711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.931014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.931044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.942053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.942449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.942479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.952952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.953264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.953296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.963726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.964038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.964073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.974635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.975035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:48.422 [2024-07-25 05:53:41.975088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.986589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.986935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.986966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:41.997670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:41.998028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:41.998060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.008723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:42.009076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.009108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.020458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:42.020814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.020858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.030727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:42.031063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.031092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.041255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:42.041660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.041695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.052295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:42.052544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.052574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.063367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:42.063741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.063771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.074383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:42.074784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.074812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.085206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:42.085638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.085666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.095771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:42.096163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.096207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.105989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 
00:33:48.422 [2024-07-25 05:53:42.106282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.106313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.422 [2024-07-25 05:53:42.116109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.422 [2024-07-25 05:53:42.116470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.422 [2024-07-25 05:53:42.116500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.127197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.127391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.127422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.137502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.137800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.137829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.148413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.148755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.148805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.159951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.160378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.160413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.170584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.170908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.170938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.181612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.182021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.182050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.192215] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.192585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.192616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.201368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.201658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.201688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.211773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.212153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.212196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.223307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.223716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.223744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.234127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.234621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.234659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.245232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.245573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.245603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.256213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.256564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.256600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.265809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.266154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.266184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.276628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.277026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.277070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.287474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.287874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.681 [2024-07-25 05:53:42.287902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.681 [2024-07-25 05:53:42.299128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.681 [2024-07-25 05:53:42.299489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.682 [2024-07-25 05:53:42.299518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.682 [2024-07-25 05:53:42.310005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.682 [2024-07-25 05:53:42.310380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.682 [2024-07-25 05:53:42.310410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.682 [2024-07-25 05:53:42.320598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.682 [2024-07-25 05:53:42.320965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.682 [2024-07-25 05:53:42.320994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.682 [2024-07-25 05:53:42.331549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.682 [2024-07-25 05:53:42.331883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.682 [2024-07-25 05:53:42.331916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.682 [2024-07-25 05:53:42.342736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.682 [2024-07-25 05:53:42.343161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.682 [2024-07-25 05:53:42.343189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.682 [2024-07-25 05:53:42.353679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.682 [2024-07-25 05:53:42.354087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:48.682 [2024-07-25 05:53:42.354130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.682 [2024-07-25 05:53:42.364504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.682 [2024-07-25 05:53:42.364876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.682 [2024-07-25 05:53:42.364921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.682 [2024-07-25 05:53:42.375215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.682 [2024-07-25 05:53:42.375640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.682 [2024-07-25 05:53:42.375670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.384811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.385136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.385166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.395453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.395768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.395801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.408683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.409188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.409216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.424302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.424773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.424802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.441680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.442059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.442089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.457508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.458014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.458045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.473650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.474201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.474230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.489868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.490491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.490522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.506058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.506396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.506429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.521972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 
00:33:48.940 [2024-07-25 05:53:42.522516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.522561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.537073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.537692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.940 [2024-07-25 05:53:42.537725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.940 [2024-07-25 05:53:42.553404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.940 [2024-07-25 05:53:42.553883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.941 [2024-07-25 05:53:42.553913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.941 [2024-07-25 05:53:42.569463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.941 [2024-07-25 05:53:42.569889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.941 [2024-07-25 05:53:42.569918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.941 [2024-07-25 05:53:42.585752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.941 [2024-07-25 05:53:42.586277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.941 [2024-07-25 05:53:42.586306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.941 [2024-07-25 05:53:42.602573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.941 [2024-07-25 05:53:42.603069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.941 [2024-07-25 05:53:42.603110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.941 [2024-07-25 05:53:42.618674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.941 [2024-07-25 05:53:42.619194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.941 [2024-07-25 05:53:42.619247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.941 [2024-07-25 05:53:42.634600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:48.941 [2024-07-25 05:53:42.635075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.941 [2024-07-25 05:53:42.635105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.199 [2024-07-25 05:53:42.648890] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:49.199 [2024-07-25 05:53:42.649289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.199 [2024-07-25 05:53:42.649335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.199 [2024-07-25 05:53:42.664844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:49.199 [2024-07-25 05:53:42.665418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.199 [2024-07-25 05:53:42.665448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.199 [2024-07-25 05:53:42.681578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe867c0) with pdu=0x2000190fef90 00:33:49.199 [2024-07-25 05:53:42.682111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.199 [2024-07-25 05:53:42.682140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.199 00:33:49.199 Latency(us) 00:33:49.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.199 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:49.199 nvme0n1 : 2.01 2697.30 337.16 0.00 0.00 5914.23 3932.16 16990.81 00:33:49.199 =================================================================================================================== 00:33:49.199 Total : 2697.30 337.16 0.00 0.00 5914.23 3932.16 16990.81 
00:33:49.199 0 00:33:49.199 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:49.199 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:49.199 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:49.199 | .driver_specific 00:33:49.199 | .nvme_error 00:33:49.199 | .status_code 00:33:49.199 | .command_transient_transport_error' 00:33:49.199 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 174 > 0 )) 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1775004 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1775004 ']' 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1775004 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1775004 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1775004' 00:33:49.457 killing process with pid 1775004 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1775004 00:33:49.457 Received shutdown signal, test time was about 2.000000 seconds 00:33:49.457 00:33:49.457 Latency(us) 00:33:49.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.457 =================================================================================================================== 00:33:49.457 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:49.457 05:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1775004 00:33:49.714 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1773635 00:33:49.714 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1773635 ']' 00:33:49.714 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1773635 00:33:49.714 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:49.715 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:49.715 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1773635 00:33:49.715 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:49.715 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:49.715 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1773635' 00:33:49.715 killing process with pid 1773635 00:33:49.715 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@969 -- # kill 1773635 00:33:49.715 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1773635 00:33:49.972 00:33:49.972 real 0m15.287s 00:33:49.972 user 0m30.664s 00:33:49.972 sys 0m4.039s 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.972 ************************************ 00:33:49.972 END TEST nvmf_digest_error 00:33:49.972 ************************************ 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:49.972 rmmod nvme_tcp 00:33:49.972 rmmod nvme_fabrics 00:33:49.972 rmmod nvme_keyring 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1773635 ']' 00:33:49.972 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1773635 00:33:49.972 
05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1773635 ']' 00:33:49.973 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1773635 00:33:49.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1773635) - No such process 00:33:49.973 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1773635 is not found' 00:33:49.973 Process with pid 1773635 is not found 00:33:49.973 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:49.973 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:49.973 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:49.973 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:49.973 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:49.973 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.973 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.973 05:53:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:52.501 00:33:52.501 real 0m34.953s 00:33:52.501 user 1m2.275s 00:33:52.501 sys 0m9.510s 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.501 ************************************ 00:33:52.501 END TEST nvmf_digest 00:33:52.501 ************************************ 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 
-- # [[ 0 -eq 1 ]] 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.501 ************************************ 00:33:52.501 START TEST nvmf_bdevperf 00:33:52.501 ************************************ 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:52.501 * Looking for test storage... 
00:33:52.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.501 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:52.502 05:53:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.402 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.402 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:54.402 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:54.402 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:54.402 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:54.402 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:54.402 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:54.402 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:54.402 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:33:54.402 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.403 05:53:47 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:54.403 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:54.403 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:54.403 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:54.403 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:54.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:54.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:33:54.403 00:33:54.403 --- 10.0.0.2 ping statistics --- 00:33:54.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.403 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:54.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:33:54.403 00:33:54.403 --- 10.0.0.1 ping statistics --- 00:33:54.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.403 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:54.403 
05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1777350 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1777350 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1777350 ']' 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:54.403 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.404 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:54.404 05:53:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.404 [2024-07-25 05:53:47.809301] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:33:54.404 [2024-07-25 05:53:47.809390] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.404 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.404 [2024-07-25 05:53:47.874739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:54.404 [2024-07-25 05:53:47.958932] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.404 [2024-07-25 05:53:47.958987] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.404 [2024-07-25 05:53:47.959010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.404 [2024-07-25 05:53:47.959021] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.404 [2024-07-25 05:53:47.959030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:54.404 [2024-07-25 05:53:47.959083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.404 [2024-07-25 05:53:47.959145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:54.404 [2024-07-25 05:53:47.959148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.404 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:54.404 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:33:54.404 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:54.404 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.404 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.404 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.404 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:54.404 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.404 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.404 [2024-07-25 05:53:48.090494] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.662 Malloc0 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.662 [2024-07-25 05:53:48.155535] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:54.662 
05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:54.662 { 00:33:54.662 "params": { 00:33:54.662 "name": "Nvme$subsystem", 00:33:54.662 "trtype": "$TEST_TRANSPORT", 00:33:54.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.662 "adrfam": "ipv4", 00:33:54.662 "trsvcid": "$NVMF_PORT", 00:33:54.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.662 "hdgst": ${hdgst:-false}, 00:33:54.662 "ddgst": ${ddgst:-false} 00:33:54.662 }, 00:33:54.662 "method": "bdev_nvme_attach_controller" 00:33:54.662 } 00:33:54.662 EOF 00:33:54.662 )") 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:54.662 05:53:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:54.662 "params": { 00:33:54.662 "name": "Nvme1", 00:33:54.662 "trtype": "tcp", 00:33:54.662 "traddr": "10.0.0.2", 00:33:54.662 "adrfam": "ipv4", 00:33:54.662 "trsvcid": "4420", 00:33:54.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.662 "hdgst": false, 00:33:54.662 "ddgst": false 00:33:54.662 }, 00:33:54.662 "method": "bdev_nvme_attach_controller" 00:33:54.662 }' 00:33:54.662 [2024-07-25 05:53:48.202172] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:33:54.662 [2024-07-25 05:53:48.202274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777491 ] 00:33:54.662 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.662 [2024-07-25 05:53:48.260590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.662 [2024-07-25 05:53:48.350308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.920 Running I/O for 1 seconds... 00:33:56.293 00:33:56.293 Latency(us) 00:33:56.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.293 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:56.293 Verification LBA range: start 0x0 length 0x4000 00:33:56.293 Nvme1n1 : 1.01 7361.76 28.76 0.00 0.00 17303.41 3301.07 20583.16 00:33:56.293 =================================================================================================================== 00:33:56.293 Total : 7361.76 28.76 0.00 0.00 17303.41 3301.07 20583.16 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1777637 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:56.293 05:53:49 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:56.293 { 00:33:56.293 "params": { 00:33:56.293 "name": "Nvme$subsystem", 00:33:56.293 "trtype": "$TEST_TRANSPORT", 00:33:56.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.293 "adrfam": "ipv4", 00:33:56.293 "trsvcid": "$NVMF_PORT", 00:33:56.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.293 "hdgst": ${hdgst:-false}, 00:33:56.293 "ddgst": ${ddgst:-false} 00:33:56.293 }, 00:33:56.293 "method": "bdev_nvme_attach_controller" 00:33:56.293 } 00:33:56.293 EOF 00:33:56.293 )") 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:56.293 05:53:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:56.293 "params": { 00:33:56.293 "name": "Nvme1", 00:33:56.293 "trtype": "tcp", 00:33:56.293 "traddr": "10.0.0.2", 00:33:56.293 "adrfam": "ipv4", 00:33:56.293 "trsvcid": "4420", 00:33:56.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:56.293 "hdgst": false, 00:33:56.293 "ddgst": false 00:33:56.293 }, 00:33:56.293 "method": "bdev_nvme_attach_controller" 00:33:56.293 }' 00:33:56.293 [2024-07-25 05:53:49.838961] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:33:56.293 [2024-07-25 05:53:49.839059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777637 ] 00:33:56.293 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.293 [2024-07-25 05:53:49.899072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.293 [2024-07-25 05:53:49.982759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.551 Running I/O for 15 seconds... 00:33:59.832 05:53:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1777350 00:33:59.832 05:53:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:59.832 [2024-07-25 05:53:52.812221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27736 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.832 [2024-07-25 05:53:52.812461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.832 [2024-07-25 05:53:52.812496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.832 [2024-07-25 05:53:52.812551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.832 [2024-07-25 05:53:52.812588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 
05:53:52.812645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812859] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.812968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.812985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27840 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813264] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:59.832 [2024-07-25 05:53:52.813688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.832 [2024-07-25 05:53:52.813760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.832 [2024-07-25 05:53:52.813793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.832 [2024-07-25 05:53:52.813811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.832 [2024-07-25 05:53:52.813828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.813846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.813861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.813879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.813895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.813912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.813928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.813945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.813961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.813978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.813994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 
[2024-07-25 05:53:52.814299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.814604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 
[2024-07-25 05:53:52.814876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.814975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.814992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815063] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:28672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:28704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815477] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.833 [2024-07-25 05:53:52.815595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:100 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.833 [2024-07-25 05:53:52.815885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.815974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.815991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.833 [2024-07-25 05:53:52.816428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.833 [2024-07-25 05:53:52.816443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 
[2024-07-25 05:53:52.816473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.834 [2024-07-25 05:53:52.816814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf240 is same with the state(5) to be set 00:33:59.834 [2024-07-25 05:53:52.816852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:59.834 [2024-07-25 05:53:52.816865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:59.834 [2024-07-25 05:53:52.816879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28408 len:8 PRP1 0x0 PRP2 0x0 00:33:59.834 [2024-07-25 05:53:52.816898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.834 [2024-07-25 05:53:52.816970] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfbf240 was disconnected and freed. reset controller. 00:33:59.834 [2024-07-25 05:53:52.820870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.834 [2024-07-25 05:53:52.820948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.834 [2024-07-25 05:53:52.821695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.834 [2024-07-25 05:53:52.821724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.834 [2024-07-25 05:53:52.821740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.834 [2024-07-25 05:53:52.821986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.834 [2024-07-25 05:53:52.822231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.834 [2024-07-25 05:53:52.822265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.834 [2024-07-25 05:53:52.822300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.834 [2024-07-25 05:53:52.825916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.834 [2024-07-25 05:53:52.835056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.834 [2024-07-25 05:53:52.835488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.834 [2024-07-25 05:53:52.835521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.834 [2024-07-25 05:53:52.835539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.834 [2024-07-25 05:53:52.835780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.834 [2024-07-25 05:53:52.836023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.834 [2024-07-25 05:53:52.836048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.834 [2024-07-25 05:53:52.836064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.834 [2024-07-25 05:53:52.839662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.211350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.211789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.211821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.211839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.212079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.212334] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.212359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.212375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.215954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.225256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.225686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.225718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.225736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.225981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.226226] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.226259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.226277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.229859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.239158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.239597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.239629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.239647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.239886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.240130] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.240155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.240170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.243763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.253089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.253511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.253549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.253568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.253808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.254052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.254077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.254093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.257684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.267031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.267497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.267526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.267543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.267809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.268053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.268078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.268100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.271700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.281021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.281479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.281511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.281530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.281770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.282012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.282038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.282054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.285655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.294977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.295417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.295449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.295467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.295707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.295950] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.295975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.295991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.299591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.308931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.309370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.309403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.309421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.309662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.309905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.309931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.309947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.313550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.322874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.323300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.323328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.323345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.323586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.323830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.323854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.323870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.327457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.336765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.337174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.337207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.337225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.337476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.337729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.337755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.337771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.341359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.350676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.836 [2024-07-25 05:53:53.351084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.836 [2024-07-25 05:53:53.351118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.836 [2024-07-25 05:53:53.351137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.836 [2024-07-25 05:53:53.351393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.836 [2024-07-25 05:53:53.351640] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.836 [2024-07-25 05:53:53.351665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.836 [2024-07-25 05:53:53.351681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.836 [2024-07-25 05:53:53.355277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.836 [2024-07-25 05:53:53.364601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.365047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.365079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.365097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.365352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.365601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.365628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.365644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.369228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.378512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.378923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.378956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.378974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.379215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.379474] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.379500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.379517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.383104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.392444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.392874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.392907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.392925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.393165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.393422] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.393448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.393465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.397052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.406371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.406799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.406831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.406849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.407088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.407345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.407371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.407388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.410978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.420304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.420741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.420772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.420790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.421029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.421286] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.421312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.421328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.424913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.434227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.434672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.434699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.434715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.434947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.435205] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.435231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.435263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.438897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.448223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.448670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.448703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.448722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.448962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.449206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.449231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.449256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.452853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.462045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.462532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.462562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.462597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.462839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.463083] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.463108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.463123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.466402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.475416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.475858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.475888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.475905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.476156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.476383] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.476405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.476419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.479459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.488792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.489252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.489281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.489297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.489553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.489747] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.489768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.489781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.492752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.502069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.502532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.502576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.502592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.502833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.503043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.503068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.503082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.506051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.515323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.515742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.515771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.515787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.837 [2024-07-25 05:53:53.516040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.837 [2024-07-25 05:53:53.516234] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.837 [2024-07-25 05:53:53.516279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.837 [2024-07-25 05:53:53.516294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.837 [2024-07-25 05:53:53.519258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.837 [2024-07-25 05:53:53.528582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.837 [2024-07-25 05:53:53.529101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.837 [2024-07-25 05:53:53.529133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:33:59.837 [2024-07-25 05:53:53.529150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:33:59.838 [2024-07-25 05:53:53.529393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:33:59.838 [2024-07-25 05:53:53.529635] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.838 [2024-07-25 05:53:53.529656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.838 [2024-07-25 05:53:53.529670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.095 [2024-07-25 05:53:53.533375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.095 [2024-07-25 05:53:53.542238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.095 [2024-07-25 05:53:53.542646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.095 [2024-07-25 05:53:53.542678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.095 [2024-07-25 05:53:53.542695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.095 [2024-07-25 05:53:53.542934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.095 [2024-07-25 05:53:53.543145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.095 [2024-07-25 05:53:53.543166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.095 [2024-07-25 05:53:53.543180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.095 [2024-07-25 05:53:53.546154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.095 [2024-07-25 05:53:53.555434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.095 [2024-07-25 05:53:53.555844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.095 [2024-07-25 05:53:53.555874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.095 [2024-07-25 05:53:53.555891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.095 [2024-07-25 05:53:53.556145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.095 [2024-07-25 05:53:53.556370] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.095 [2024-07-25 05:53:53.556392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.095 [2024-07-25 05:53:53.556406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.095 [2024-07-25 05:53:53.559385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.095 [2024-07-25 05:53:53.568670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.095 [2024-07-25 05:53:53.569062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.095 [2024-07-25 05:53:53.569092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.095 [2024-07-25 05:53:53.569110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.095 [2024-07-25 05:53:53.569368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.095 [2024-07-25 05:53:53.569590] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.095 [2024-07-25 05:53:53.569612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.095 [2024-07-25 05:53:53.569642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.095 [2024-07-25 05:53:53.573132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.095 [2024-07-25 05:53:53.582003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.095 [2024-07-25 05:53:53.582397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.095 [2024-07-25 05:53:53.582426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.095 [2024-07-25 05:53:53.582442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.095 [2024-07-25 05:53:53.582664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.095 [2024-07-25 05:53:53.582873] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.095 [2024-07-25 05:53:53.582894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.095 [2024-07-25 05:53:53.582907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.095 [2024-07-25 05:53:53.585940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.095 [2024-07-25 05:53:53.595335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.095 [2024-07-25 05:53:53.595819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.095 [2024-07-25 05:53:53.595848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.095 [2024-07-25 05:53:53.595870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.095 [2024-07-25 05:53:53.596125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.095 [2024-07-25 05:53:53.596349] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.095 [2024-07-25 05:53:53.596371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.095 [2024-07-25 05:53:53.596384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.095 [2024-07-25 05:53:53.599349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.095 [2024-07-25 05:53:53.608661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.095 [2024-07-25 05:53:53.609048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.095 [2024-07-25 05:53:53.609075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.095 [2024-07-25 05:53:53.609091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.095 [2024-07-25 05:53:53.609337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.095 [2024-07-25 05:53:53.609553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.095 [2024-07-25 05:53:53.609574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.095 [2024-07-25 05:53:53.609587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.095 [2024-07-25 05:53:53.612532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.095 [2024-07-25 05:53:53.621958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.095 [2024-07-25 05:53:53.622353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.095 [2024-07-25 05:53:53.622382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.095 [2024-07-25 05:53:53.622399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.095 [2024-07-25 05:53:53.622654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.095 [2024-07-25 05:53:53.622847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.095 [2024-07-25 05:53:53.622868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.095 [2024-07-25 05:53:53.622880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.095 [2024-07-25 05:53:53.625845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.095 [2024-07-25 05:53:53.635279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.095 [2024-07-25 05:53:53.635644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.095 [2024-07-25 05:53:53.635672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.095 [2024-07-25 05:53:53.635687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.095 [2024-07-25 05:53:53.635903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.095 [2024-07-25 05:53:53.636113] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.096 [2024-07-25 05:53:53.636134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.096 [2024-07-25 05:53:53.636151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.096 [2024-07-25 05:53:53.639206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.096 [2024-07-25 05:53:53.648556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.096 [2024-07-25 05:53:53.648950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.096 [2024-07-25 05:53:53.648980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.096 [2024-07-25 05:53:53.648997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.096 [2024-07-25 05:53:53.649238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.096 [2024-07-25 05:53:53.649448] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.096 [2024-07-25 05:53:53.649469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.096 [2024-07-25 05:53:53.649483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.096 [2024-07-25 05:53:53.652444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.096 [2024-07-25 05:53:53.661876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.096 [2024-07-25 05:53:53.662333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.096 [2024-07-25 05:53:53.662363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.096 [2024-07-25 05:53:53.662379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.096 [2024-07-25 05:53:53.662636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.096 [2024-07-25 05:53:53.662830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.096 [2024-07-25 05:53:53.662851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.096 [2024-07-25 05:53:53.662863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.096 [2024-07-25 05:53:53.665828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.096 [2024-07-25 05:53:53.675079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.096 [2024-07-25 05:53:53.675495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.096 [2024-07-25 05:53:53.675524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.096 [2024-07-25 05:53:53.675540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.096 [2024-07-25 05:53:53.675790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.096 [2024-07-25 05:53:53.675984] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.096 [2024-07-25 05:53:53.676004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.096 [2024-07-25 05:53:53.676018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.096 [2024-07-25 05:53:53.678983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.096 [2024-07-25 05:53:53.688420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.096 [2024-07-25 05:53:53.688800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.096 [2024-07-25 05:53:53.688827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.096 [2024-07-25 05:53:53.688844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.096 [2024-07-25 05:53:53.689082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.096 [2024-07-25 05:53:53.689311] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.096 [2024-07-25 05:53:53.689334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.096 [2024-07-25 05:53:53.689348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.096 [2024-07-25 05:53:53.692324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.096 [2024-07-25 05:53:53.701636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.096 [2024-07-25 05:53:53.702042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.096 [2024-07-25 05:53:53.702072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.096 [2024-07-25 05:53:53.702089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.096 [2024-07-25 05:53:53.702344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.096 [2024-07-25 05:53:53.702574] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.096 [2024-07-25 05:53:53.702595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.096 [2024-07-25 05:53:53.702608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.096 [2024-07-25 05:53:53.705587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.096 [2024-07-25 05:53:53.714847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.096 [2024-07-25 05:53:53.715263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.096 [2024-07-25 05:53:53.715292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.096 [2024-07-25 05:53:53.715308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.096 [2024-07-25 05:53:53.715545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.096 [2024-07-25 05:53:53.715740] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.096 [2024-07-25 05:53:53.715760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.096 [2024-07-25 05:53:53.715773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.096 [2024-07-25 05:53:53.718740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.096 [2024-07-25 05:53:53.728170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.096 [2024-07-25 05:53:53.728657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.096 [2024-07-25 05:53:53.728686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.096 [2024-07-25 05:53:53.728703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.096 [2024-07-25 05:53:53.728949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.096 [2024-07-25 05:53:53.729155] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.096 [2024-07-25 05:53:53.729177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.096 [2024-07-25 05:53:53.729191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.096 [2024-07-25 05:53:53.732169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.096 [2024-07-25 05:53:53.741673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.096 [2024-07-25 05:53:53.742123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.096 [2024-07-25 05:53:53.742151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.096 [2024-07-25 05:53:53.742166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.096 [2024-07-25 05:53:53.742434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.096 [2024-07-25 05:53:53.742650] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.096 [2024-07-25 05:53:53.742670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.096 [2024-07-25 05:53:53.742683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.096 [2024-07-25 05:53:53.745648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.096 [2024-07-25 05:53:53.754959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.096 [2024-07-25 05:53:53.755312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.096 [2024-07-25 05:53:53.755340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.096 [2024-07-25 05:53:53.755357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.096 [2024-07-25 05:53:53.755567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.096 [2024-07-25 05:53:53.755777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.096 [2024-07-25 05:53:53.755797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.097 [2024-07-25 05:53:53.755810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.097 [2024-07-25 05:53:53.758780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.097 [2024-07-25 05:53:53.768216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.097 [2024-07-25 05:53:53.768616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.097 [2024-07-25 05:53:53.768644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.097 [2024-07-25 05:53:53.768660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.097 [2024-07-25 05:53:53.768898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.097 [2024-07-25 05:53:53.769119] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.097 [2024-07-25 05:53:53.769140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.097 [2024-07-25 05:53:53.769153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.097 [2024-07-25 05:53:53.772234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.097 [2024-07-25 05:53:53.781503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.097 [2024-07-25 05:53:53.781922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.097 [2024-07-25 05:53:53.781951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.097 [2024-07-25 05:53:53.781967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.097 [2024-07-25 05:53:53.782222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.097 [2024-07-25 05:53:53.782444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.097 [2024-07-25 05:53:53.782466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.097 [2024-07-25 05:53:53.782479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.097 [2024-07-25 05:53:53.785446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.097 [2024-07-25 05:53:53.795258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.097 [2024-07-25 05:53:53.795786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.097 [2024-07-25 05:53:53.795817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.097 [2024-07-25 05:53:53.795834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.097 [2024-07-25 05:53:53.796063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.356 [2024-07-25 05:53:53.796402] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.356 [2024-07-25 05:53:53.796448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.356 [2024-07-25 05:53:53.796470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.356 [2024-07-25 05:53:53.799619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.356 [2024-07-25 05:53:53.808617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.356 [2024-07-25 05:53:53.808991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.356 [2024-07-25 05:53:53.809020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.356 [2024-07-25 05:53:53.809036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.356 [2024-07-25 05:53:53.809282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.356 [2024-07-25 05:53:53.809484] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.356 [2024-07-25 05:53:53.809505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.356 [2024-07-25 05:53:53.809533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.356 [2024-07-25 05:53:53.812489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.356 [2024-07-25 05:53:53.821936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.356 [2024-07-25 05:53:53.822371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.356 [2024-07-25 05:53:53.822406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.356 [2024-07-25 05:53:53.822423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.356 [2024-07-25 05:53:53.822666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.356 [2024-07-25 05:53:53.822929] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.356 [2024-07-25 05:53:53.822963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.356 [2024-07-25 05:53:53.822979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.356 [2024-07-25 05:53:53.826376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.356 [2024-07-25 05:53:53.835402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.356 [2024-07-25 05:53:53.835843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.356 [2024-07-25 05:53:53.835873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.356 [2024-07-25 05:53:53.835889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.356 [2024-07-25 05:53:53.836142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.356 [2024-07-25 05:53:53.836368] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.356 [2024-07-25 05:53:53.836391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.356 [2024-07-25 05:53:53.836405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.356 [2024-07-25 05:53:53.839685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.356 [2024-07-25 05:53:53.848710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.356 [2024-07-25 05:53:53.849067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.356 [2024-07-25 05:53:53.849096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.356 [2024-07-25 05:53:53.849112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.356 [2024-07-25 05:53:53.849367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.356 [2024-07-25 05:53:53.849609] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.356 [2024-07-25 05:53:53.849630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.356 [2024-07-25 05:53:53.849643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.356 [2024-07-25 05:53:53.852662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.356 [2024-07-25 05:53:53.861933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.356 [2024-07-25 05:53:53.862327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.356 [2024-07-25 05:53:53.862357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.356 [2024-07-25 05:53:53.862373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.356 [2024-07-25 05:53:53.862630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.356 [2024-07-25 05:53:53.862830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.356 [2024-07-25 05:53:53.862851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.356 [2024-07-25 05:53:53.862864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.356 [2024-07-25 05:53:53.865828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.356 [2024-07-25 05:53:53.875272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.356 [2024-07-25 05:53:53.875670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.356 [2024-07-25 05:53:53.875700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.356 [2024-07-25 05:53:53.875716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.356 [2024-07-25 05:53:53.875971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.356 [2024-07-25 05:53:53.876167] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.356 [2024-07-25 05:53:53.876187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.356 [2024-07-25 05:53:53.876200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.356 [2024-07-25 05:53:53.879163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.356 [2024-07-25 05:53:53.888628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.356 [2024-07-25 05:53:53.889016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.356 [2024-07-25 05:53:53.889045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.356 [2024-07-25 05:53:53.889062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:53.889309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.357 [2024-07-25 05:53:53.889529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.357 [2024-07-25 05:53:53.889565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.357 [2024-07-25 05:53:53.889579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.357 [2024-07-25 05:53:53.892562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.357 [2024-07-25 05:53:53.901808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.357 [2024-07-25 05:53:53.902177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.357 [2024-07-25 05:53:53.902221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.357 [2024-07-25 05:53:53.902237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:53.902478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.357 [2024-07-25 05:53:53.902707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.357 [2024-07-25 05:53:53.902728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.357 [2024-07-25 05:53:53.902741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.357 [2024-07-25 05:53:53.905668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.357 [2024-07-25 05:53:53.915075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.357 [2024-07-25 05:53:53.915525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.357 [2024-07-25 05:53:53.915569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.357 [2024-07-25 05:53:53.915586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:53.915821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.357 [2024-07-25 05:53:53.916016] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.357 [2024-07-25 05:53:53.916037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.357 [2024-07-25 05:53:53.916050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.357 [2024-07-25 05:53:53.919050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.357 [2024-07-25 05:53:53.928284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.357 [2024-07-25 05:53:53.928662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.357 [2024-07-25 05:53:53.928690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.357 [2024-07-25 05:53:53.928707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:53.928932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.357 [2024-07-25 05:53:53.929133] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.357 [2024-07-25 05:53:53.929153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.357 [2024-07-25 05:53:53.929167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.357 [2024-07-25 05:53:53.932163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.357 [2024-07-25 05:53:53.941648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.357 [2024-07-25 05:53:53.942039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.357 [2024-07-25 05:53:53.942067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.357 [2024-07-25 05:53:53.942084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:53.942350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.357 [2024-07-25 05:53:53.942550] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.357 [2024-07-25 05:53:53.942571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.357 [2024-07-25 05:53:53.942599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.357 [2024-07-25 05:53:53.945525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.357 [2024-07-25 05:53:53.954900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.357 [2024-07-25 05:53:53.955354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.357 [2024-07-25 05:53:53.955383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.357 [2024-07-25 05:53:53.955405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:53.955659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.357 [2024-07-25 05:53:53.955854] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.357 [2024-07-25 05:53:53.955875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.357 [2024-07-25 05:53:53.955888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.357 [2024-07-25 05:53:53.958840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.357 [2024-07-25 05:53:53.968101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.357 [2024-07-25 05:53:53.968536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.357 [2024-07-25 05:53:53.968564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.357 [2024-07-25 05:53:53.968580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:53.968828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.357 [2024-07-25 05:53:53.969022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.357 [2024-07-25 05:53:53.969042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.357 [2024-07-25 05:53:53.969055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.357 [2024-07-25 05:53:53.972023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.357 [2024-07-25 05:53:53.981466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.357 [2024-07-25 05:53:53.981876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.357 [2024-07-25 05:53:53.981905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.357 [2024-07-25 05:53:53.981922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:53.982175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.357 [2024-07-25 05:53:53.982398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.357 [2024-07-25 05:53:53.982420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.357 [2024-07-25 05:53:53.982433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.357 [2024-07-25 05:53:53.985396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.357 [2024-07-25 05:53:53.994707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.357 [2024-07-25 05:53:53.995124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.357 [2024-07-25 05:53:53.995152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.357 [2024-07-25 05:53:53.995168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:53.995414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.357 [2024-07-25 05:53:53.995627] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.357 [2024-07-25 05:53:53.995648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.357 [2024-07-25 05:53:53.995666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.357 [2024-07-25 05:53:53.998632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.357 [2024-07-25 05:53:54.008044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.357 [2024-07-25 05:53:54.008509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.357 [2024-07-25 05:53:54.008540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.357 [2024-07-25 05:53:54.008556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:54.008798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.357 [2024-07-25 05:53:54.009008] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.357 [2024-07-25 05:53:54.009029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.357 [2024-07-25 05:53:54.009042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.357 [2024-07-25 05:53:54.012005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.357 [2024-07-25 05:53:54.021283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.357 [2024-07-25 05:53:54.021705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.357 [2024-07-25 05:53:54.021733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.357 [2024-07-25 05:53:54.021750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.357 [2024-07-25 05:53:54.021988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.358 [2024-07-25 05:53:54.022184] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.358 [2024-07-25 05:53:54.022205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.358 [2024-07-25 05:53:54.022233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.358 [2024-07-25 05:53:54.025186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.358 [2024-07-25 05:53:54.034458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.358 [2024-07-25 05:53:54.034867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.358 [2024-07-25 05:53:54.034897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.358 [2024-07-25 05:53:54.034914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.358 [2024-07-25 05:53:54.035168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.358 [2024-07-25 05:53:54.035393] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.358 [2024-07-25 05:53:54.035416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.358 [2024-07-25 05:53:54.035429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.358 [2024-07-25 05:53:54.038396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.358 [2024-07-25 05:53:54.047739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.358 [2024-07-25 05:53:54.048151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.358 [2024-07-25 05:53:54.048179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.358 [2024-07-25 05:53:54.048195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.358 [2024-07-25 05:53:54.048456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.358 [2024-07-25 05:53:54.048670] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.358 [2024-07-25 05:53:54.048691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.358 [2024-07-25 05:53:54.048703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.358 [2024-07-25 05:53:54.051664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.655 [2024-07-25 05:53:54.061436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.655 [2024-07-25 05:53:54.061854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.655 [2024-07-25 05:53:54.061885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.655 [2024-07-25 05:53:54.061902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.655 [2024-07-25 05:53:54.062119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.655 [2024-07-25 05:53:54.062365] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.655 [2024-07-25 05:53:54.062400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.655 [2024-07-25 05:53:54.062427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.655 [2024-07-25 05:53:54.065954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.655 [2024-07-25 05:53:54.074859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.655 [2024-07-25 05:53:54.075273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.655 [2024-07-25 05:53:54.075304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.655 [2024-07-25 05:53:54.075321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.655 [2024-07-25 05:53:54.075552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.655 [2024-07-25 05:53:54.075766] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.655 [2024-07-25 05:53:54.075798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.655 [2024-07-25 05:53:54.075813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.655 [2024-07-25 05:53:54.079204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.655 [2024-07-25 05:53:54.088312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.655 [2024-07-25 05:53:54.088743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.655 [2024-07-25 05:53:54.088772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.655 [2024-07-25 05:53:54.088788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.655 [2024-07-25 05:53:54.089048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.655 [2024-07-25 05:53:54.089267] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.655 [2024-07-25 05:53:54.089303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.655 [2024-07-25 05:53:54.089318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.655 [2024-07-25 05:53:54.092417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.655 [2024-07-25 05:53:54.101694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.655 [2024-07-25 05:53:54.102089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.655 [2024-07-25 05:53:54.102120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.655 [2024-07-25 05:53:54.102137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.655 [2024-07-25 05:53:54.102394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.655 [2024-07-25 05:53:54.102624] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.655 [2024-07-25 05:53:54.102646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.655 [2024-07-25 05:53:54.102659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.655 [2024-07-25 05:53:54.105675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.655 [2024-07-25 05:53:54.114917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.655 [2024-07-25 05:53:54.115315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.655 [2024-07-25 05:53:54.115343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.655 [2024-07-25 05:53:54.115359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.655 [2024-07-25 05:53:54.115574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.655 [2024-07-25 05:53:54.115784] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.655 [2024-07-25 05:53:54.115815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.655 [2024-07-25 05:53:54.115828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.655 [2024-07-25 05:53:54.118796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.655 [2024-07-25 05:53:54.128170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.655 [2024-07-25 05:53:54.128589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.655 [2024-07-25 05:53:54.128618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.655 [2024-07-25 05:53:54.128635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.655 [2024-07-25 05:53:54.128888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.655 [2024-07-25 05:53:54.129082] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.655 [2024-07-25 05:53:54.129103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.655 [2024-07-25 05:53:54.129120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.655 [2024-07-25 05:53:54.132086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.655 [2024-07-25 05:53:54.141606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.655 [2024-07-25 05:53:54.142018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.656 [2024-07-25 05:53:54.142046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.656 [2024-07-25 05:53:54.142062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.656 [2024-07-25 05:53:54.142305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.656 [2024-07-25 05:53:54.142513] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.656 [2024-07-25 05:53:54.142558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.656 [2024-07-25 05:53:54.142573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.656 [2024-07-25 05:53:54.145554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.656 [2024-07-25 05:53:54.154845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.656 [2024-07-25 05:53:54.155247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.656 [2024-07-25 05:53:54.155277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.656 [2024-07-25 05:53:54.155294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.656 [2024-07-25 05:53:54.155536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.656 [2024-07-25 05:53:54.155745] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.656 [2024-07-25 05:53:54.155766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.656 [2024-07-25 05:53:54.155779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.656 [2024-07-25 05:53:54.158745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.656 [2024-07-25 05:53:54.168163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.656 [2024-07-25 05:53:54.168647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.656 [2024-07-25 05:53:54.168677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.656 [2024-07-25 05:53:54.168693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.656 [2024-07-25 05:53:54.168946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.656 [2024-07-25 05:53:54.169140] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.656 [2024-07-25 05:53:54.169161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.656 [2024-07-25 05:53:54.169174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.656 [2024-07-25 05:53:54.172140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.656 [2024-07-25 05:53:54.181404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.656 [2024-07-25 05:53:54.181880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.656 [2024-07-25 05:53:54.181914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.656 [2024-07-25 05:53:54.181931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.656 [2024-07-25 05:53:54.182183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.656 [2024-07-25 05:53:54.182407] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.656 [2024-07-25 05:53:54.182429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.656 [2024-07-25 05:53:54.182442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.656 [2024-07-25 05:53:54.185401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.656 [2024-07-25 05:53:54.194697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.656 [2024-07-25 05:53:54.195087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.656 [2024-07-25 05:53:54.195115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.656 [2024-07-25 05:53:54.195131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.656 [2024-07-25 05:53:54.195394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.656 [2024-07-25 05:53:54.195610] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.656 [2024-07-25 05:53:54.195630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.656 [2024-07-25 05:53:54.195644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.656 [2024-07-25 05:53:54.198607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.656 [2024-07-25 05:53:54.207913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.656 [2024-07-25 05:53:54.208430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.656 [2024-07-25 05:53:54.208460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.656 [2024-07-25 05:53:54.208477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.656 [2024-07-25 05:53:54.208730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.656 [2024-07-25 05:53:54.208924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.656 [2024-07-25 05:53:54.208945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.656 [2024-07-25 05:53:54.208958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.656 [2024-07-25 05:53:54.211926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.656 [2024-07-25 05:53:54.221168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.656 [2024-07-25 05:53:54.221581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.656 [2024-07-25 05:53:54.221609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.656 [2024-07-25 05:53:54.221624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.656 [2024-07-25 05:53:54.221858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.656 [2024-07-25 05:53:54.222063] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.656 [2024-07-25 05:53:54.222084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.656 [2024-07-25 05:53:54.222097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.656 [2024-07-25 05:53:54.225030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.656 [2024-07-25 05:53:54.234401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.656 [2024-07-25 05:53:54.234811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.656 [2024-07-25 05:53:54.234839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.656 [2024-07-25 05:53:54.234855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.656 [2024-07-25 05:53:54.235105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.656 [2024-07-25 05:53:54.235348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.656 [2024-07-25 05:53:54.235372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.656 [2024-07-25 05:53:54.235387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.656 [2024-07-25 05:53:54.238379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.656 [2024-07-25 05:53:54.247686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.656 [2024-07-25 05:53:54.248145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.656 [2024-07-25 05:53:54.248174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.656 [2024-07-25 05:53:54.248189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.656 [2024-07-25 05:53:54.248455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.656 [2024-07-25 05:53:54.248666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.656 [2024-07-25 05:53:54.248687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.656 [2024-07-25 05:53:54.248700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.656 [2024-07-25 05:53:54.251659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.656 [2024-07-25 05:53:54.260901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.656 [2024-07-25 05:53:54.261356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.656 [2024-07-25 05:53:54.261385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.656 [2024-07-25 05:53:54.261401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.656 [2024-07-25 05:53:54.261656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.656 [2024-07-25 05:53:54.261859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.656 [2024-07-25 05:53:54.261881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.656 [2024-07-25 05:53:54.261894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.656 [2024-07-25 05:53:54.264864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.656 [2024-07-25 05:53:54.274265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.657 [2024-07-25 05:53:54.274617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.657 [2024-07-25 05:53:54.274645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.657 [2024-07-25 05:53:54.274661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.657 [2024-07-25 05:53:54.274888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.657 [2024-07-25 05:53:54.275099] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.657 [2024-07-25 05:53:54.275119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.657 [2024-07-25 05:53:54.275132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.657 [2024-07-25 05:53:54.278097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.657 [2024-07-25 05:53:54.287496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.657 [2024-07-25 05:53:54.287902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.657 [2024-07-25 05:53:54.287931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.657 [2024-07-25 05:53:54.287947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.657 [2024-07-25 05:53:54.288199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.657 [2024-07-25 05:53:54.288421] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.657 [2024-07-25 05:53:54.288442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.657 [2024-07-25 05:53:54.288455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.657 [2024-07-25 05:53:54.291456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.657 [2024-07-25 05:53:54.300712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.657 [2024-07-25 05:53:54.301100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.657 [2024-07-25 05:53:54.301128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.657 [2024-07-25 05:53:54.301145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.657 [2024-07-25 05:53:54.301409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.657 [2024-07-25 05:53:54.301624] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.657 [2024-07-25 05:53:54.301644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.657 [2024-07-25 05:53:54.301657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.657 [2024-07-25 05:53:54.304660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.657 [2024-07-25 05:53:54.313946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.657 [2024-07-25 05:53:54.314301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.657 [2024-07-25 05:53:54.314331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.657 [2024-07-25 05:53:54.314352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.657 [2024-07-25 05:53:54.314589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.657 [2024-07-25 05:53:54.314801] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.657 [2024-07-25 05:53:54.314821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.657 [2024-07-25 05:53:54.314834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.657 [2024-07-25 05:53:54.317798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.657 [2024-07-25 05:53:54.327123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.657 [2024-07-25 05:53:54.327581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.657 [2024-07-25 05:53:54.327611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.657 [2024-07-25 05:53:54.327628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.657 [2024-07-25 05:53:54.327845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.657 [2024-07-25 05:53:54.328064] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.657 [2024-07-25 05:53:54.328088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.657 [2024-07-25 05:53:54.328102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.657 [2024-07-25 05:53:54.331461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.916 [2024-07-25 05:53:54.340628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.916 [2024-07-25 05:53:54.341099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.916 [2024-07-25 05:53:54.341131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.916 [2024-07-25 05:53:54.341149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.916 [2024-07-25 05:53:54.341380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.916 [2024-07-25 05:53:54.341626] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.916 [2024-07-25 05:53:54.341647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.916 [2024-07-25 05:53:54.341661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.916 [2024-07-25 05:53:54.344858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.916 [2024-07-25 05:53:54.354028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.916 [2024-07-25 05:53:54.354414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.916 [2024-07-25 05:53:54.354444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.916 [2024-07-25 05:53:54.354461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.916 [2024-07-25 05:53:54.354715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.916 [2024-07-25 05:53:54.354911] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.916 [2024-07-25 05:53:54.354937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.916 [2024-07-25 05:53:54.354951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.916 [2024-07-25 05:53:54.357958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.916 [2024-07-25 05:53:54.367417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.916 [2024-07-25 05:53:54.367833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.916 [2024-07-25 05:53:54.367863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.916 [2024-07-25 05:53:54.367879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.916 [2024-07-25 05:53:54.368128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.916 [2024-07-25 05:53:54.368355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.916 [2024-07-25 05:53:54.368377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.916 [2024-07-25 05:53:54.368391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.916 [2024-07-25 05:53:54.371427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.916 [2024-07-25 05:53:54.380677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.916 [2024-07-25 05:53:54.381070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.916 [2024-07-25 05:53:54.381100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.916 [2024-07-25 05:53:54.381116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.916 [2024-07-25 05:53:54.381382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.916 [2024-07-25 05:53:54.381597] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.916 [2024-07-25 05:53:54.381617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.916 [2024-07-25 05:53:54.381630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.916 [2024-07-25 05:53:54.384592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.916 [2024-07-25 05:53:54.393880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.916 [2024-07-25 05:53:54.394359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.916 [2024-07-25 05:53:54.394388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.916 [2024-07-25 05:53:54.394404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.916 [2024-07-25 05:53:54.394659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.916 [2024-07-25 05:53:54.394870] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.916 [2024-07-25 05:53:54.394891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.916 [2024-07-25 05:53:54.394903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.916 [2024-07-25 05:53:54.397900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.916 [2024-07-25 05:53:54.407212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.916 [2024-07-25 05:53:54.407660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.916 [2024-07-25 05:53:54.407689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.916 [2024-07-25 05:53:54.407705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.916 [2024-07-25 05:53:54.407959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.916 [2024-07-25 05:53:54.408155] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.916 [2024-07-25 05:53:54.408176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.916 [2024-07-25 05:53:54.408188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.916 [2024-07-25 05:53:54.411155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.916 [2024-07-25 05:53:54.420452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.916 [2024-07-25 05:53:54.420931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.916 [2024-07-25 05:53:54.420961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.916 [2024-07-25 05:53:54.420977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.916 [2024-07-25 05:53:54.421231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.916 [2024-07-25 05:53:54.421451] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.916 [2024-07-25 05:53:54.421472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.916 [2024-07-25 05:53:54.421486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.916 [2024-07-25 05:53:54.424466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.916 [2024-07-25 05:53:54.433775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.916 [2024-07-25 05:53:54.434166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.916 [2024-07-25 05:53:54.434194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.916 [2024-07-25 05:53:54.434209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.916 [2024-07-25 05:53:54.434486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.916 [2024-07-25 05:53:54.434699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.917 [2024-07-25 05:53:54.434720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.917 [2024-07-25 05:53:54.434735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.917 [2024-07-25 05:53:54.437701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.917 [2024-07-25 05:53:54.446979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.917 [2024-07-25 05:53:54.447433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.917 [2024-07-25 05:53:54.447463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.917 [2024-07-25 05:53:54.447479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.917 [2024-07-25 05:53:54.447737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.917 [2024-07-25 05:53:54.447931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.917 [2024-07-25 05:53:54.447952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.917 [2024-07-25 05:53:54.447964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.917 [2024-07-25 05:53:54.450935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.917 [2024-07-25 05:53:54.460179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.917 [2024-07-25 05:53:54.460614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.917 [2024-07-25 05:53:54.460643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.917 [2024-07-25 05:53:54.460659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.917 [2024-07-25 05:53:54.460896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.917 [2024-07-25 05:53:54.461090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.917 [2024-07-25 05:53:54.461111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.917 [2024-07-25 05:53:54.461125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.917 [2024-07-25 05:53:54.464111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.917 [2024-07-25 05:53:54.473384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.917 [2024-07-25 05:53:54.473822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.917 [2024-07-25 05:53:54.473850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.917 [2024-07-25 05:53:54.473866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.917 [2024-07-25 05:53:54.474101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.917 [2024-07-25 05:53:54.474324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.917 [2024-07-25 05:53:54.474345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.917 [2024-07-25 05:53:54.474358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.917 [2024-07-25 05:53:54.477317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.917 [2024-07-25 05:53:54.486525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.917 [2024-07-25 05:53:54.486880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.917 [2024-07-25 05:53:54.486908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.917 [2024-07-25 05:53:54.486924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.917 [2024-07-25 05:53:54.487154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.917 [2024-07-25 05:53:54.487379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.917 [2024-07-25 05:53:54.487402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.917 [2024-07-25 05:53:54.487419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.917 [2024-07-25 05:53:54.490401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.917 [2024-07-25 05:53:54.499841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.917 [2024-07-25 05:53:54.500268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.917 [2024-07-25 05:53:54.500297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.917 [2024-07-25 05:53:54.500313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.917 [2024-07-25 05:53:54.500551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.917 [2024-07-25 05:53:54.500760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.917 [2024-07-25 05:53:54.500781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.917 [2024-07-25 05:53:54.500793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.917 [2024-07-25 05:53:54.503801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.917 [2024-07-25 05:53:54.513043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.917 [2024-07-25 05:53:54.513536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.917 [2024-07-25 05:53:54.513564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420
00:34:00.917 [2024-07-25 05:53:54.513580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set
00:34:00.917 [2024-07-25 05:53:54.513829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor
00:34:00.917 [2024-07-25 05:53:54.514022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.917 [2024-07-25 05:53:54.514042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.917 [2024-07-25 05:53:54.514055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.917 [2024-07-25 05:53:54.517021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.917 [2024-07-25 05:53:54.526353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.917 [2024-07-25 05:53:54.526776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.917 [2024-07-25 05:53:54.526805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.917 [2024-07-25 05:53:54.526821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.917 [2024-07-25 05:53:54.527073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.917 [2024-07-25 05:53:54.527295] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.917 [2024-07-25 05:53:54.527318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.917 [2024-07-25 05:53:54.527331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.917 [2024-07-25 05:53:54.530294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.917 [2024-07-25 05:53:54.539723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.917 [2024-07-25 05:53:54.540177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.917 [2024-07-25 05:53:54.540210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.917 [2024-07-25 05:53:54.540226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.917 [2024-07-25 05:53:54.540490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.917 [2024-07-25 05:53:54.540720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.917 [2024-07-25 05:53:54.540741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.917 [2024-07-25 05:53:54.540754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.917 [2024-07-25 05:53:54.543718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.917 [2024-07-25 05:53:54.552962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.917 [2024-07-25 05:53:54.553323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.917 [2024-07-25 05:53:54.553352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.917 [2024-07-25 05:53:54.553369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.917 [2024-07-25 05:53:54.553610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.917 [2024-07-25 05:53:54.553804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.917 [2024-07-25 05:53:54.553825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.917 [2024-07-25 05:53:54.553839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.917 [2024-07-25 05:53:54.556809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.917 [2024-07-25 05:53:54.566289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.917 [2024-07-25 05:53:54.566707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.917 [2024-07-25 05:53:54.566737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.917 [2024-07-25 05:53:54.566753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.917 [2024-07-25 05:53:54.566996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.918 [2024-07-25 05:53:54.567195] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.918 [2024-07-25 05:53:54.567217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.918 [2024-07-25 05:53:54.567230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.918 [2024-07-25 05:53:54.570304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.918 [2024-07-25 05:53:54.579585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.918 [2024-07-25 05:53:54.580059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.918 [2024-07-25 05:53:54.580088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.918 [2024-07-25 05:53:54.580104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.918 [2024-07-25 05:53:54.580342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.918 [2024-07-25 05:53:54.580578] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.918 [2024-07-25 05:53:54.580601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.918 [2024-07-25 05:53:54.580615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.918 [2024-07-25 05:53:54.584024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.918 [2024-07-25 05:53:54.592967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.918 [2024-07-25 05:53:54.593365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.918 [2024-07-25 05:53:54.593402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.918 [2024-07-25 05:53:54.593419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.918 [2024-07-25 05:53:54.593671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.918 [2024-07-25 05:53:54.593866] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.918 [2024-07-25 05:53:54.593887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.918 [2024-07-25 05:53:54.593899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.918 [2024-07-25 05:53:54.596972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.918 [2024-07-25 05:53:54.606203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.918 [2024-07-25 05:53:54.606622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.918 [2024-07-25 05:53:54.606651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:00.918 [2024-07-25 05:53:54.606667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:00.918 [2024-07-25 05:53:54.606926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:00.918 [2024-07-25 05:53:54.607120] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.918 [2024-07-25 05:53:54.607141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.918 [2024-07-25 05:53:54.607155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.918 [2024-07-25 05:53:54.610127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.177 [2024-07-25 05:53:54.619832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.177 [2024-07-25 05:53:54.620199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.177 [2024-07-25 05:53:54.620255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.177 [2024-07-25 05:53:54.620275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.177 [2024-07-25 05:53:54.620531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.177 [2024-07-25 05:53:54.620745] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.177 [2024-07-25 05:53:54.620767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.177 [2024-07-25 05:53:54.620780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.177 [2024-07-25 05:53:54.624150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.177 [2024-07-25 05:53:54.633723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.177 [2024-07-25 05:53:54.634167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.177 [2024-07-25 05:53:54.634200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.177 [2024-07-25 05:53:54.634219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.177 [2024-07-25 05:53:54.634469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.177 [2024-07-25 05:53:54.634713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.177 [2024-07-25 05:53:54.634739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.177 [2024-07-25 05:53:54.634755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.177 [2024-07-25 05:53:54.638343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.177 [2024-07-25 05:53:54.647652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.177 [2024-07-25 05:53:54.648156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.177 [2024-07-25 05:53:54.648207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.177 [2024-07-25 05:53:54.648225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.177 [2024-07-25 05:53:54.648480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.177 [2024-07-25 05:53:54.648725] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.177 [2024-07-25 05:53:54.648750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.177 [2024-07-25 05:53:54.648766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.177 [2024-07-25 05:53:54.652359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.177 [2024-07-25 05:53:54.661673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.177 [2024-07-25 05:53:54.662148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.177 [2024-07-25 05:53:54.662198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.177 [2024-07-25 05:53:54.662217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.177 [2024-07-25 05:53:54.662470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.177 [2024-07-25 05:53:54.662714] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.177 [2024-07-25 05:53:54.662740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.177 [2024-07-25 05:53:54.662756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.177 [2024-07-25 05:53:54.666363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.177 [2024-07-25 05:53:54.675671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.177 [2024-07-25 05:53:54.676114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.177 [2024-07-25 05:53:54.676142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.177 [2024-07-25 05:53:54.676162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.177 [2024-07-25 05:53:54.676446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.177 [2024-07-25 05:53:54.676691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.177 [2024-07-25 05:53:54.676717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.177 [2024-07-25 05:53:54.676733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.177 [2024-07-25 05:53:54.680325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.177 [2024-07-25 05:53:54.689634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.177 [2024-07-25 05:53:54.690072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.177 [2024-07-25 05:53:54.690104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.177 [2024-07-25 05:53:54.690122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.177 [2024-07-25 05:53:54.690374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.177 [2024-07-25 05:53:54.690618] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.177 [2024-07-25 05:53:54.690643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.177 [2024-07-25 05:53:54.690659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.177 [2024-07-25 05:53:54.694249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.177 [2024-07-25 05:53:54.703557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.177 [2024-07-25 05:53:54.703993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.177 [2024-07-25 05:53:54.704025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.177 [2024-07-25 05:53:54.704043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.177 [2024-07-25 05:53:54.704295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.177 [2024-07-25 05:53:54.704538] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.177 [2024-07-25 05:53:54.704564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.177 [2024-07-25 05:53:54.704580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.177 [2024-07-25 05:53:54.708161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.177 [2024-07-25 05:53:54.717502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.177 [2024-07-25 05:53:54.717949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.177 [2024-07-25 05:53:54.717977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.177 [2024-07-25 05:53:54.717993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.177 [2024-07-25 05:53:54.718259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.177 [2024-07-25 05:53:54.718503] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.177 [2024-07-25 05:53:54.718535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.177 [2024-07-25 05:53:54.718552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.177 [2024-07-25 05:53:54.722140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.177 [2024-07-25 05:53:54.731463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.177 [2024-07-25 05:53:54.731894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.177 [2024-07-25 05:53:54.731926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.177 [2024-07-25 05:53:54.731944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.177 [2024-07-25 05:53:54.732184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.177 [2024-07-25 05:53:54.732442] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.177 [2024-07-25 05:53:54.732468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.177 [2024-07-25 05:53:54.732485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.177 [2024-07-25 05:53:54.736069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.177 [2024-07-25 05:53:54.745392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.177 [2024-07-25 05:53:54.745887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.177 [2024-07-25 05:53:54.745915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.177 [2024-07-25 05:53:54.745931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.178 [2024-07-25 05:53:54.746186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.178 [2024-07-25 05:53:54.746443] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.178 [2024-07-25 05:53:54.746469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.178 [2024-07-25 05:53:54.746486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.178 [2024-07-25 05:53:54.750074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.178 [2024-07-25 05:53:54.759392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.178 [2024-07-25 05:53:54.759842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.178 [2024-07-25 05:53:54.759876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.178 [2024-07-25 05:53:54.759893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.178 [2024-07-25 05:53:54.760134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.178 [2024-07-25 05:53:54.760390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.178 [2024-07-25 05:53:54.760416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.178 [2024-07-25 05:53:54.760433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.178 [2024-07-25 05:53:54.764034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.178 [2024-07-25 05:53:54.773323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.178 [2024-07-25 05:53:54.773739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.178 [2024-07-25 05:53:54.773772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.178 [2024-07-25 05:53:54.773791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.178 [2024-07-25 05:53:54.774031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.178 [2024-07-25 05:53:54.774290] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.178 [2024-07-25 05:53:54.774316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.178 [2024-07-25 05:53:54.774332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.178 [2024-07-25 05:53:54.777915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.178 [2024-07-25 05:53:54.787278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.178 [2024-07-25 05:53:54.787732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.178 [2024-07-25 05:53:54.787764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.178 [2024-07-25 05:53:54.787783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.178 [2024-07-25 05:53:54.788023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.178 [2024-07-25 05:53:54.788280] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.178 [2024-07-25 05:53:54.788307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.178 [2024-07-25 05:53:54.788323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.178 [2024-07-25 05:53:54.791905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.178 [2024-07-25 05:53:54.801215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.178 [2024-07-25 05:53:54.801655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.178 [2024-07-25 05:53:54.801687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.178 [2024-07-25 05:53:54.801705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.178 [2024-07-25 05:53:54.801944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.178 [2024-07-25 05:53:54.802187] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.178 [2024-07-25 05:53:54.802213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.178 [2024-07-25 05:53:54.802229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.178 [2024-07-25 05:53:54.805827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.178 [2024-07-25 05:53:54.815139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.178 [2024-07-25 05:53:54.815581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.178 [2024-07-25 05:53:54.815613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.178 [2024-07-25 05:53:54.815633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.178 [2024-07-25 05:53:54.815879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.178 [2024-07-25 05:53:54.816123] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.178 [2024-07-25 05:53:54.816146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.178 [2024-07-25 05:53:54.816162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.178 [2024-07-25 05:53:54.819768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.178 [2024-07-25 05:53:54.829080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.178 [2024-07-25 05:53:54.829524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.178 [2024-07-25 05:53:54.829556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.178 [2024-07-25 05:53:54.829574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.178 [2024-07-25 05:53:54.829814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.178 [2024-07-25 05:53:54.830057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.178 [2024-07-25 05:53:54.830080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.178 [2024-07-25 05:53:54.830096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.178 [2024-07-25 05:53:54.833687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.178 [2024-07-25 05:53:54.843000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.178 [2024-07-25 05:53:54.843440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.178 [2024-07-25 05:53:54.843471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.178 [2024-07-25 05:53:54.843488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.178 [2024-07-25 05:53:54.843755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.178 [2024-07-25 05:53:54.844001] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.178 [2024-07-25 05:53:54.844027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.178 [2024-07-25 05:53:54.844043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.178 [2024-07-25 05:53:54.847639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.699 [2024-07-25 05:53:55.247395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.699 [2024-07-25 05:53:55.247915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.699 [2024-07-25 05:53:55.247970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.699 [2024-07-25 05:53:55.247987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.699 [2024-07-25 05:53:55.248228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.699 [2024-07-25 05:53:55.248484] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.699 [2024-07-25 05:53:55.248510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.699 [2024-07-25 05:53:55.248526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.699 [2024-07-25 05:53:55.252115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.699 [2024-07-25 05:53:55.261438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.699 [2024-07-25 05:53:55.261960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.699 [2024-07-25 05:53:55.262011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.699 [2024-07-25 05:53:55.262030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.699 [2024-07-25 05:53:55.262287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.699 [2024-07-25 05:53:55.262532] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.699 [2024-07-25 05:53:55.262558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.699 [2024-07-25 05:53:55.262574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.699 [2024-07-25 05:53:55.266174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.699 [2024-07-25 05:53:55.275488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.699 [2024-07-25 05:53:55.276051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.699 [2024-07-25 05:53:55.276104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.699 [2024-07-25 05:53:55.276122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.699 [2024-07-25 05:53:55.276376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.699 [2024-07-25 05:53:55.276620] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.699 [2024-07-25 05:53:55.276645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.699 [2024-07-25 05:53:55.276661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.699 [2024-07-25 05:53:55.280253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.699 [2024-07-25 05:53:55.289347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.699 [2024-07-25 05:53:55.289786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.699 [2024-07-25 05:53:55.289822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.699 [2024-07-25 05:53:55.289841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.699 [2024-07-25 05:53:55.290081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.699 [2024-07-25 05:53:55.290340] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.699 [2024-07-25 05:53:55.290367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.699 [2024-07-25 05:53:55.290384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.699 [2024-07-25 05:53:55.293984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.699 [2024-07-25 05:53:55.303302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.699 [2024-07-25 05:53:55.303715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.699 [2024-07-25 05:53:55.303747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.699 [2024-07-25 05:53:55.303766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.700 [2024-07-25 05:53:55.304006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.700 [2024-07-25 05:53:55.304264] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.700 [2024-07-25 05:53:55.304291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.700 [2024-07-25 05:53:55.304306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.700 [2024-07-25 05:53:55.307891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.700 [2024-07-25 05:53:55.317200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.700 [2024-07-25 05:53:55.317733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.700 [2024-07-25 05:53:55.317784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.700 [2024-07-25 05:53:55.317802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.700 [2024-07-25 05:53:55.318043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.700 [2024-07-25 05:53:55.318301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.700 [2024-07-25 05:53:55.318326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.700 [2024-07-25 05:53:55.318342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.700 [2024-07-25 05:53:55.321924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.700 [2024-07-25 05:53:55.331234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.700 [2024-07-25 05:53:55.331769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.700 [2024-07-25 05:53:55.331823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.700 [2024-07-25 05:53:55.331841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.700 [2024-07-25 05:53:55.332080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.700 [2024-07-25 05:53:55.332343] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.700 [2024-07-25 05:53:55.332369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.700 [2024-07-25 05:53:55.332385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.700 [2024-07-25 05:53:55.335968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.700 [2024-07-25 05:53:55.345288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.700 [2024-07-25 05:53:55.345715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.700 [2024-07-25 05:53:55.345747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.700 [2024-07-25 05:53:55.345765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.700 [2024-07-25 05:53:55.346005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.700 [2024-07-25 05:53:55.346261] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.700 [2024-07-25 05:53:55.346287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.700 [2024-07-25 05:53:55.346303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.700 [2024-07-25 05:53:55.349893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.700 [2024-07-25 05:53:55.359386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.700 [2024-07-25 05:53:55.359830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.700 [2024-07-25 05:53:55.359863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.700 [2024-07-25 05:53:55.359882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.700 [2024-07-25 05:53:55.360122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.700 [2024-07-25 05:53:55.360381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.700 [2024-07-25 05:53:55.360407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.700 [2024-07-25 05:53:55.360423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.700 [2024-07-25 05:53:55.364008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.700 [2024-07-25 05:53:55.373345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.700 [2024-07-25 05:53:55.373775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.700 [2024-07-25 05:53:55.373807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.700 [2024-07-25 05:53:55.373825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.700 [2024-07-25 05:53:55.374066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.700 [2024-07-25 05:53:55.374324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.700 [2024-07-25 05:53:55.374350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.700 [2024-07-25 05:53:55.374366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.700 [2024-07-25 05:53:55.377958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.700 [2024-07-25 05:53:55.387278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.700 [2024-07-25 05:53:55.387709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.700 [2024-07-25 05:53:55.387740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.700 [2024-07-25 05:53:55.387759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.700 [2024-07-25 05:53:55.387998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.700 [2024-07-25 05:53:55.388254] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.700 [2024-07-25 05:53:55.388279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.700 [2024-07-25 05:53:55.388296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.700 [2024-07-25 05:53:55.391883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.959 [2024-07-25 05:53:55.401389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.959 [2024-07-25 05:53:55.401885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-07-25 05:53:55.401920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-07-25 05:53:55.401939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.959 [2024-07-25 05:53:55.402179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.959 [2024-07-25 05:53:55.402437] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.959 [2024-07-25 05:53:55.402464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.959 [2024-07-25 05:53:55.402480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.959 [2024-07-25 05:53:55.406122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.959 [2024-07-25 05:53:55.415454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.959 [2024-07-25 05:53:55.415888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-07-25 05:53:55.415921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-07-25 05:53:55.415939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.959 [2024-07-25 05:53:55.416179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.959 [2024-07-25 05:53:55.416438] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.959 [2024-07-25 05:53:55.416464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.959 [2024-07-25 05:53:55.416481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.959 [2024-07-25 05:53:55.420069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.959 [2024-07-25 05:53:55.429398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.959 [2024-07-25 05:53:55.429955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-07-25 05:53:55.430024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-07-25 05:53:55.430049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.959 [2024-07-25 05:53:55.430302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.959 [2024-07-25 05:53:55.430547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.959 [2024-07-25 05:53:55.430572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.959 [2024-07-25 05:53:55.430588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.959 [2024-07-25 05:53:55.434174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.959 [2024-07-25 05:53:55.443287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.959 [2024-07-25 05:53:55.443716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-07-25 05:53:55.443749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-07-25 05:53:55.443767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.959 [2024-07-25 05:53:55.444007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.959 [2024-07-25 05:53:55.444266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.959 [2024-07-25 05:53:55.444291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.959 [2024-07-25 05:53:55.444308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.959 [2024-07-25 05:53:55.447891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.959 [2024-07-25 05:53:55.457205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.959 [2024-07-25 05:53:55.457779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-07-25 05:53:55.457841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-07-25 05:53:55.457859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.959 [2024-07-25 05:53:55.458099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.959 [2024-07-25 05:53:55.458355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.959 [2024-07-25 05:53:55.458380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.959 [2024-07-25 05:53:55.458397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.959 [2024-07-25 05:53:55.461982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.959 [2024-07-25 05:53:55.471121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.959 [2024-07-25 05:53:55.471637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-07-25 05:53:55.471690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-07-25 05:53:55.471708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.959 [2024-07-25 05:53:55.471948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.959 [2024-07-25 05:53:55.472191] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.959 [2024-07-25 05:53:55.472222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.959 [2024-07-25 05:53:55.472240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.959 [2024-07-25 05:53:55.475841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.959 [2024-07-25 05:53:55.485197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.959 [2024-07-25 05:53:55.485657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-07-25 05:53:55.485690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-07-25 05:53:55.485709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.959 [2024-07-25 05:53:55.485949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.959 [2024-07-25 05:53:55.486193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.959 [2024-07-25 05:53:55.486218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.959 [2024-07-25 05:53:55.486235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.959 [2024-07-25 05:53:55.489841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.959 [2024-07-25 05:53:55.499169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.959 [2024-07-25 05:53:55.499586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-07-25 05:53:55.499619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-07-25 05:53:55.499637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.959 [2024-07-25 05:53:55.499877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.959 [2024-07-25 05:53:55.500121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.959 [2024-07-25 05:53:55.500145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.959 [2024-07-25 05:53:55.500161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.959 [2024-07-25 05:53:55.503756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.959 [2024-07-25 05:53:55.513064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.959 [2024-07-25 05:53:55.513486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-07-25 05:53:55.513518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-07-25 05:53:55.513537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.959 [2024-07-25 05:53:55.513776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.959 [2024-07-25 05:53:55.514020] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.959 [2024-07-25 05:53:55.514044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.959 [2024-07-25 05:53:55.514061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.959 [2024-07-25 05:53:55.517656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.959 [2024-07-25 05:53:55.526970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.959 [2024-07-25 05:53:55.527506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.959 [2024-07-25 05:53:55.527548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.959 [2024-07-25 05:53:55.527566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.960 [2024-07-25 05:53:55.527806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.960 [2024-07-25 05:53:55.528049] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.960 [2024-07-25 05:53:55.528074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.960 [2024-07-25 05:53:55.528091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.960 [2024-07-25 05:53:55.531695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.960 [2024-07-25 05:53:55.541031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.960 [2024-07-25 05:53:55.541479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.960 [2024-07-25 05:53:55.541511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.960 [2024-07-25 05:53:55.541530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.960 [2024-07-25 05:53:55.541769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.960 [2024-07-25 05:53:55.542013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.960 [2024-07-25 05:53:55.542038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.960 [2024-07-25 05:53:55.542054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.960 [2024-07-25 05:53:55.545676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.960 [2024-07-25 05:53:55.555006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.960 [2024-07-25 05:53:55.555440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.960 [2024-07-25 05:53:55.555472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.960 [2024-07-25 05:53:55.555491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.960 [2024-07-25 05:53:55.555743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.960 [2024-07-25 05:53:55.555988] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.960 [2024-07-25 05:53:55.556013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.960 [2024-07-25 05:53:55.556029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.960 [2024-07-25 05:53:55.559619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.960 [2024-07-25 05:53:55.568943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.960 [2024-07-25 05:53:55.569376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.960 [2024-07-25 05:53:55.569410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.960 [2024-07-25 05:53:55.569429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.960 [2024-07-25 05:53:55.569675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.960 [2024-07-25 05:53:55.569921] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.960 [2024-07-25 05:53:55.569946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.960 [2024-07-25 05:53:55.569962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.960 [2024-07-25 05:53:55.573561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.960 [2024-07-25 05:53:55.582872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.960 [2024-07-25 05:53:55.583275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.960 [2024-07-25 05:53:55.583309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.960 [2024-07-25 05:53:55.583328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.960 [2024-07-25 05:53:55.583569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.960 [2024-07-25 05:53:55.583814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.960 [2024-07-25 05:53:55.583840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.960 [2024-07-25 05:53:55.583857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.960 [2024-07-25 05:53:55.587458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.960 [2024-07-25 05:53:55.596784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.960 [2024-07-25 05:53:55.597223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.960 [2024-07-25 05:53:55.597265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.960 [2024-07-25 05:53:55.597285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.960 [2024-07-25 05:53:55.597536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.960 [2024-07-25 05:53:55.597781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.960 [2024-07-25 05:53:55.597807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.960 [2024-07-25 05:53:55.597823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.960 [2024-07-25 05:53:55.601422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.960 [2024-07-25 05:53:55.610748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.960 [2024-07-25 05:53:55.611159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.960 [2024-07-25 05:53:55.611191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.960 [2024-07-25 05:53:55.611209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.960 [2024-07-25 05:53:55.611461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.960 [2024-07-25 05:53:55.611706] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.960 [2024-07-25 05:53:55.611731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.960 [2024-07-25 05:53:55.611753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.960 [2024-07-25 05:53:55.615352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.960 [2024-07-25 05:53:55.624678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.960 [2024-07-25 05:53:55.625107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.960 [2024-07-25 05:53:55.625139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.960 [2024-07-25 05:53:55.625158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.960 [2024-07-25 05:53:55.625414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.960 [2024-07-25 05:53:55.625658] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.960 [2024-07-25 05:53:55.625683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.960 [2024-07-25 05:53:55.625699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.960 [2024-07-25 05:53:55.629296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.960 [2024-07-25 05:53:55.638619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.960 [2024-07-25 05:53:55.639043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.960 [2024-07-25 05:53:55.639076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.960 [2024-07-25 05:53:55.639094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.960 [2024-07-25 05:53:55.639345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.960 [2024-07-25 05:53:55.639590] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.960 [2024-07-25 05:53:55.639615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.960 [2024-07-25 05:53:55.639632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.960 [2024-07-25 05:53:55.643219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.960 [2024-07-25 05:53:55.652569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.960 [2024-07-25 05:53:55.652982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.960 [2024-07-25 05:53:55.653013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:01.960 [2024-07-25 05:53:55.653031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:01.960 [2024-07-25 05:53:55.653282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:01.960 [2024-07-25 05:53:55.653526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.960 [2024-07-25 05:53:55.653550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.960 [2024-07-25 05:53:55.653566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.960 [2024-07-25 05:53:55.657212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.220 [2024-07-25 05:53:55.666548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.220 [2024-07-25 05:53:55.666946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-07-25 05:53:55.666987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-07-25 05:53:55.667007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.220 [2024-07-25 05:53:55.667261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.220 [2024-07-25 05:53:55.667510] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.220 [2024-07-25 05:53:55.667534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.220 [2024-07-25 05:53:55.667550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.220 [2024-07-25 05:53:55.671134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.220 [2024-07-25 05:53:55.680483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.220 [2024-07-25 05:53:55.680951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-07-25 05:53:55.680983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-07-25 05:53:55.681002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.220 [2024-07-25 05:53:55.681256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.220 [2024-07-25 05:53:55.681501] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.220 [2024-07-25 05:53:55.681524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.220 [2024-07-25 05:53:55.681540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.220 [2024-07-25 05:53:55.685122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.220 [2024-07-25 05:53:55.694449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.220 [2024-07-25 05:53:55.694858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-07-25 05:53:55.694889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-07-25 05:53:55.694907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.220 [2024-07-25 05:53:55.695147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.220 [2024-07-25 05:53:55.695402] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.220 [2024-07-25 05:53:55.695426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.220 [2024-07-25 05:53:55.695442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.220 [2024-07-25 05:53:55.699027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.220 [2024-07-25 05:53:55.708350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.220 [2024-07-25 05:53:55.708783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-07-25 05:53:55.708813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-07-25 05:53:55.708831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.220 [2024-07-25 05:53:55.709070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.220 [2024-07-25 05:53:55.709333] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.220 [2024-07-25 05:53:55.709358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.220 [2024-07-25 05:53:55.709374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.220 [2024-07-25 05:53:55.712960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.220 [2024-07-25 05:53:55.722276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.220 [2024-07-25 05:53:55.722700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-07-25 05:53:55.722747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-07-25 05:53:55.722765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.220 [2024-07-25 05:53:55.723005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.220 [2024-07-25 05:53:55.723261] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.220 [2024-07-25 05:53:55.723285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.220 [2024-07-25 05:53:55.723300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.220 [2024-07-25 05:53:55.726887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.220 [2024-07-25 05:53:55.736199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.220 [2024-07-25 05:53:55.736633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-07-25 05:53:55.736664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-07-25 05:53:55.736682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.220 [2024-07-25 05:53:55.736922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.220 [2024-07-25 05:53:55.737164] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.220 [2024-07-25 05:53:55.737187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.220 [2024-07-25 05:53:55.737202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.220 [2024-07-25 05:53:55.740828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.220 [2024-07-25 05:53:55.750151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.220 [2024-07-25 05:53:55.750567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-07-25 05:53:55.750599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-07-25 05:53:55.750616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.220 [2024-07-25 05:53:55.750857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.220 [2024-07-25 05:53:55.751099] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.220 [2024-07-25 05:53:55.751122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.220 [2024-07-25 05:53:55.751138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.220 [2024-07-25 05:53:55.754747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.220 [2024-07-25 05:53:55.764062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.220 [2024-07-25 05:53:55.764480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-07-25 05:53:55.764511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-07-25 05:53:55.764529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.220 [2024-07-25 05:53:55.764768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.220 [2024-07-25 05:53:55.765011] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.220 [2024-07-25 05:53:55.765034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.220 [2024-07-25 05:53:55.765049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.220 [2024-07-25 05:53:55.768662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.220 [2024-07-25 05:53:55.777977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.220 [2024-07-25 05:53:55.778491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-07-25 05:53:55.778541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-07-25 05:53:55.778559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.220 [2024-07-25 05:53:55.778798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.220 [2024-07-25 05:53:55.779041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.220 [2024-07-25 05:53:55.779065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.220 [2024-07-25 05:53:55.779080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.220 [2024-07-25 05:53:55.782678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.220 [2024-07-25 05:53:55.791990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.220 [2024-07-25 05:53:55.792426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.220 [2024-07-25 05:53:55.792457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.220 [2024-07-25 05:53:55.792475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.220 [2024-07-25 05:53:55.792714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.221 [2024-07-25 05:53:55.792956] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.221 [2024-07-25 05:53:55.792979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.221 [2024-07-25 05:53:55.792995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.221 [2024-07-25 05:53:55.796594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1777350 Killed "${NVMF_APP[@]}" "$@" 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.221 [2024-07-25 05:53:55.805907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.221 [2024-07-25 05:53:55.806287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-07-25 05:53:55.806319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-07-25 05:53:55.806336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.221 [2024-07-25 05:53:55.806575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.221 [2024-07-25 05:53:55.806818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.221 [2024-07-25 05:53:55.806841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.221 [2024-07-25 05:53:55.806856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1778306 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1778306 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1778306 ']' 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:02.221 05:53:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.221 [2024-07-25 05:53:55.810449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.221 [2024-07-25 05:53:55.819758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.221 [2024-07-25 05:53:55.820172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-07-25 05:53:55.820204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-07-25 05:53:55.820222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.221 [2024-07-25 05:53:55.820471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.221 [2024-07-25 05:53:55.820715] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.221 [2024-07-25 05:53:55.820739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.221 [2024-07-25 05:53:55.820754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.221 [2024-07-25 05:53:55.824345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.221 [2024-07-25 05:53:55.833652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.221 [2024-07-25 05:53:55.834070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-07-25 05:53:55.834101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-07-25 05:53:55.834124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.221 [2024-07-25 05:53:55.834378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.221 [2024-07-25 05:53:55.834633] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.221 [2024-07-25 05:53:55.834656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.221 [2024-07-25 05:53:55.834671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.221 [2024-07-25 05:53:55.838265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.221 [2024-07-25 05:53:55.847571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.221 [2024-07-25 05:53:55.847968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-07-25 05:53:55.847999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-07-25 05:53:55.848016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.221 [2024-07-25 05:53:55.848265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.221 [2024-07-25 05:53:55.848509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.221 [2024-07-25 05:53:55.848532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.221 [2024-07-25 05:53:55.848548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.221 [2024-07-25 05:53:55.852134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.221 [2024-07-25 05:53:55.857617] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:34:02.221 [2024-07-25 05:53:55.857690] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.221 [2024-07-25 05:53:55.860949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.221 [2024-07-25 05:53:55.861312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-07-25 05:53:55.861340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-07-25 05:53:55.861355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.221 [2024-07-25 05:53:55.861564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.221 [2024-07-25 05:53:55.861779] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.221 [2024-07-25 05:53:55.861798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.221 [2024-07-25 05:53:55.861811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.221 [2024-07-25 05:53:55.864771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.221 [2024-07-25 05:53:55.874312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.221 [2024-07-25 05:53:55.874724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-07-25 05:53:55.874752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-07-25 05:53:55.874789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.221 [2024-07-25 05:53:55.875042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.221 [2024-07-25 05:53:55.875250] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.221 [2024-07-25 05:53:55.875269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.221 [2024-07-25 05:53:55.875281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.221 [2024-07-25 05:53:55.878417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.221 [2024-07-25 05:53:55.887502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.221 [2024-07-25 05:53:55.888006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-07-25 05:53:55.888049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-07-25 05:53:55.888065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.221 [2024-07-25 05:53:55.888311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.221 [2024-07-25 05:53:55.888511] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.221 [2024-07-25 05:53:55.888530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.221 [2024-07-25 05:53:55.888543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.221 [2024-07-25 05:53:55.891516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.221 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.221 [2024-07-25 05:53:55.901443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.221 [2024-07-25 05:53:55.901859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.221 [2024-07-25 05:53:55.901888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.221 [2024-07-25 05:53:55.901904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.221 [2024-07-25 05:53:55.902162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.221 [2024-07-25 05:53:55.902396] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.222 [2024-07-25 05:53:55.902417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.222 [2024-07-25 05:53:55.902431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.222 [2024-07-25 05:53:55.905993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.222 [2024-07-25 05:53:55.915376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.222 [2024-07-25 05:53:55.915789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.222 [2024-07-25 05:53:55.915821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.222 [2024-07-25 05:53:55.915839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.222 [2024-07-25 05:53:55.916091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.222 [2024-07-25 05:53:55.916333] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.222 [2024-07-25 05:53:55.916360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.222 [2024-07-25 05:53:55.916374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.222 [2024-07-25 05:53:55.920092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.480 [2024-07-25 05:53:55.926325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:02.480 [2024-07-25 05:53:55.929330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.480 [2024-07-25 05:53:55.929886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.480 [2024-07-25 05:53:55.929916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.481 [2024-07-25 05:53:55.929932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.481 [2024-07-25 05:53:55.930210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.481 [2024-07-25 05:53:55.930454] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.481 [2024-07-25 05:53:55.930475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.481 [2024-07-25 05:53:55.930488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.481 [2024-07-25 05:53:55.934001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.481 [2024-07-25 05:53:55.943164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.481 [2024-07-25 05:53:55.943764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.481 [2024-07-25 05:53:55.943807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.481 [2024-07-25 05:53:55.943828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.481 [2024-07-25 05:53:55.944088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.481 [2024-07-25 05:53:55.944301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.481 [2024-07-25 05:53:55.944322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.481 [2024-07-25 05:53:55.944338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.481 [2024-07-25 05:53:55.947804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.481 [2024-07-25 05:53:55.957038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.481 [2024-07-25 05:53:55.957448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.481 [2024-07-25 05:53:55.957481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.481 [2024-07-25 05:53:55.957499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.481 [2024-07-25 05:53:55.957740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.481 [2024-07-25 05:53:55.957983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.481 [2024-07-25 05:53:55.958007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.481 [2024-07-25 05:53:55.958023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.481 [2024-07-25 05:53:55.961617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.481 [2024-07-25 05:53:55.970949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.481 [2024-07-25 05:53:55.971417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.481 [2024-07-25 05:53:55.971450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.481 [2024-07-25 05:53:55.971468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.481 [2024-07-25 05:53:55.971708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.481 [2024-07-25 05:53:55.971952] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.481 [2024-07-25 05:53:55.971976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.481 [2024-07-25 05:53:55.971992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.481 [2024-07-25 05:53:55.975586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.481 [2024-07-25 05:53:55.984910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.481 [2024-07-25 05:53:55.985546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.481 [2024-07-25 05:53:55.985591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.481 [2024-07-25 05:53:55.985613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.481 [2024-07-25 05:53:55.985864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.481 [2024-07-25 05:53:55.986112] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.481 [2024-07-25 05:53:55.986136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.481 [2024-07-25 05:53:55.986154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.481 [2024-07-25 05:53:55.989757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.481 [2024-07-25 05:53:55.998856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.481 [2024-07-25 05:53:55.999282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.481 [2024-07-25 05:53:55.999315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.481 [2024-07-25 05:53:55.999333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.481 [2024-07-25 05:53:55.999574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.481 [2024-07-25 05:53:55.999818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.481 [2024-07-25 05:53:55.999841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.481 [2024-07-25 05:53:55.999859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.481 [2024-07-25 05:53:56.003449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.481 [2024-07-25 05:53:56.012745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.481 [2024-07-25 05:53:56.013192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.481 [2024-07-25 05:53:56.013224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.481 [2024-07-25 05:53:56.013267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.481 [2024-07-25 05:53:56.013510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.481 [2024-07-25 05:53:56.013754] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.481 [2024-07-25 05:53:56.013778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.481 [2024-07-25 05:53:56.013794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.481 [2024-07-25 05:53:56.017382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.481 [2024-07-25 05:53:56.020665] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.481 [2024-07-25 05:53:56.020702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.481 [2024-07-25 05:53:56.020718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.481 [2024-07-25 05:53:56.020731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:02.481 [2024-07-25 05:53:56.020743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:02.481 [2024-07-25 05:53:56.021191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:02.481 [2024-07-25 05:53:56.021260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:02.481 [2024-07-25 05:53:56.021265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.481 [2024-07-25 05:53:56.026725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.481 [2024-07-25 05:53:56.027285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.481 [2024-07-25 05:53:56.027322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.481 [2024-07-25 05:53:56.027342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.481 [2024-07-25 05:53:56.027594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.481 [2024-07-25 05:53:56.027841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.481 [2024-07-25 05:53:56.027866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.481 [2024-07-25 05:53:56.027884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.481 [2024-07-25 05:53:56.031504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.481 [2024-07-25 05:53:56.040666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.481 [2024-07-25 05:53:56.041261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.481 [2024-07-25 05:53:56.041304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.481 [2024-07-25 05:53:56.041325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.481 [2024-07-25 05:53:56.041576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.481 [2024-07-25 05:53:56.041825] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.481 [2024-07-25 05:53:56.041849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.481 [2024-07-25 05:53:56.041868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.481 [2024-07-25 05:53:56.045509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.481 [2024-07-25 05:53:56.054682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.481 [2024-07-25 05:53:56.055333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.481 [2024-07-25 05:53:56.055378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.481 [2024-07-25 05:53:56.055401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.481 [2024-07-25 05:53:56.055661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.482 [2024-07-25 05:53:56.055910] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.482 [2024-07-25 05:53:56.055934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.482 [2024-07-25 05:53:56.055959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.482 [2024-07-25 05:53:56.059556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.482 [2024-07-25 05:53:56.068728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.482 [2024-07-25 05:53:56.069319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.482 [2024-07-25 05:53:56.069363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.482 [2024-07-25 05:53:56.069386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.482 [2024-07-25 05:53:56.069637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.482 [2024-07-25 05:53:56.069896] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.482 [2024-07-25 05:53:56.069921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.482 [2024-07-25 05:53:56.069940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.482 [2024-07-25 05:53:56.073550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.482 [2024-07-25 05:53:56.082672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.482 [2024-07-25 05:53:56.083227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.482 [2024-07-25 05:53:56.083284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.482 [2024-07-25 05:53:56.083305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.482 [2024-07-25 05:53:56.083552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.482 [2024-07-25 05:53:56.083800] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.482 [2024-07-25 05:53:56.083825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.482 [2024-07-25 05:53:56.083842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.482 [2024-07-25 05:53:56.087469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.482 [2024-07-25 05:53:56.096598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.482 [2024-07-25 05:53:56.097163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.482 [2024-07-25 05:53:56.097206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.482 [2024-07-25 05:53:56.097260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.482 [2024-07-25 05:53:56.097513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.482 [2024-07-25 05:53:56.097773] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.482 [2024-07-25 05:53:56.097808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.482 [2024-07-25 05:53:56.097826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.482 [2024-07-25 05:53:56.101436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.482 [2024-07-25 05:53:56.110548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.482 [2024-07-25 05:53:56.111039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.482 [2024-07-25 05:53:56.111072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.482 [2024-07-25 05:53:56.111091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.482 [2024-07-25 05:53:56.111344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.482 [2024-07-25 05:53:56.111589] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.482 [2024-07-25 05:53:56.111613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.482 [2024-07-25 05:53:56.111630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.482 [2024-07-25 05:53:56.115215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.482 [2024-07-25 05:53:56.124437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.482 [2024-07-25 05:53:56.124817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.482 [2024-07-25 05:53:56.124845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.482 [2024-07-25 05:53:56.124871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.482 [2024-07-25 05:53:56.125087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.482 [2024-07-25 05:53:56.125318] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.482 [2024-07-25 05:53:56.125339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.482 [2024-07-25 05:53:56.125353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.482 [2024-07-25 05:53:56.128636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.482 [2024-07-25 05:53:56.138115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.482 [2024-07-25 05:53:56.138539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.482 [2024-07-25 05:53:56.138579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.482 [2024-07-25 05:53:56.138596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.482 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:02.482 [2024-07-25 05:53:56.138812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.482 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:34:02.482 [2024-07-25 05:53:56.139035] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.482 [2024-07-25 05:53:56.139056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.482 [2024-07-25 05:53:56.139070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.482 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:02.482 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:02.482 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.482 [2024-07-25 05:53:56.142381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.482 [2024-07-25 05:53:56.151840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.482 [2024-07-25 05:53:56.152232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.482 [2024-07-25 05:53:56.152266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.482 [2024-07-25 05:53:56.152283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.482 [2024-07-25 05:53:56.152499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.482 [2024-07-25 05:53:56.152729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.482 [2024-07-25 05:53:56.152751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.482 [2024-07-25 05:53:56.152765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.482 [2024-07-25 05:53:56.156068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.482 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.482 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:02.482 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.482 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.482 [2024-07-25 05:53:56.161522] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.482 [2024-07-25 05:53:56.165485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.482 [2024-07-25 05:53:56.165952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.482 [2024-07-25 05:53:56.165979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.482 [2024-07-25 05:53:56.165995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.482 [2024-07-25 05:53:56.166239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.482 [2024-07-25 05:53:56.166476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.482 [2024-07-25 05:53:56.166498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.482 [2024-07-25 05:53:56.166512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.482 [2024-07-25 05:53:56.169761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.482 [2024-07-25 05:53:56.179204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.482 [2024-07-25 05:53:56.179727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.482 [2024-07-25 05:53:56.179756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.482 [2024-07-25 05:53:56.179778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.482 [2024-07-25 05:53:56.180045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.482 [2024-07-25 05:53:56.180321] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.483 [2024-07-25 05:53:56.180344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.483 [2024-07-25 05:53:56.180358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.740 [2024-07-25 05:53:56.183806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.740 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.740 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:02.740 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.740 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.740 [2024-07-25 05:53:56.192801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.740 [2024-07-25 05:53:56.193250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.740 [2024-07-25 05:53:56.193283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.740 [2024-07-25 05:53:56.193301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.740 [2024-07-25 05:53:56.193532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.740 [2024-07-25 05:53:56.193764] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.740 [2024-07-25 05:53:56.193785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.740 [2024-07-25 05:53:56.193800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.740 [2024-07-25 05:53:56.197041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.740 [2024-07-25 05:53:56.206331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.740 [2024-07-25 05:53:56.206876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.740 [2024-07-25 05:53:56.206912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.740 [2024-07-25 05:53:56.206931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.740 Malloc0 00:34:02.740 [2024-07-25 05:53:56.207157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.741 [2024-07-25 05:53:56.207390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.741 [2024-07-25 05:53:56.207413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.741 [2024-07-25 05:53:56.207430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.741 [2024-07-25 05:53:56.210765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.741 [2024-07-25 05:53:56.220003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.741 [2024-07-25 05:53:56.220411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.741 [2024-07-25 05:53:56.220439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc4f70 with addr=10.0.0.2, port=4420 00:34:02.741 [2024-07-25 05:53:56.220455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc4f70 is same with the state(5) to be set 00:34:02.741 [2024-07-25 05:53:56.220686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc4f70 (9): Bad file descriptor 00:34:02.741 [2024-07-25 05:53:56.220898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.741 [2024-07-25 05:53:56.220918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.741 [2024-07-25 05:53:56.220931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.741 [2024-07-25 05:53:56.224246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.741 [2024-07-25 05:53:56.226950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.741 05:53:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1777637 00:34:02.741 [2024-07-25 05:53:56.233824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.741 [2024-07-25 05:53:56.265273] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
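Pulling the rpc_cmd calls out of the interleaved trace above (host/bdevperf.sh@18 through @21), the target setup amounts to the following sequence. This is a dry-run sketch only: the commands are recorded and echoed rather than sent through spdk's scripts/rpc.py, which would require a live nvmf_tgt process.

```shell
#!/usr/bin/env bash
# The four bdevperf target-setup RPCs, in the order the log issues them.
CMDS=(
  "bdev_malloc_create 64 512 -b Malloc0"  # 64 MiB malloc bdev, 512 B blocks
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
for c in "${CMDS[@]}"; do
  echo "rpc.py $c"  # dry run: print instead of invoking a live SPDK target
done
```

In the CI run these RPCs succeed even while the reconnect errors above keep printing, because the errors come from the bdevperf initiator retrying against port 4420 before the listener exists.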
00:34:12.704
00:34:12.704 Latency(us)
00:34:12.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:12.704 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:12.704 Verification LBA range: start 0x0 length 0x4000
00:34:12.704 Nvme1n1 : 15.00 6625.76 25.88 8960.15 0.00 8188.57 831.34 23690.05
00:34:12.704 ===================================================================================================================
00:34:12.704 Total : 6625.76 25.88 8960.15 0.00 8188.57 831.34 23690.05
00:34:12.704 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:34:12.704 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:12.704 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:12.704 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:12.704 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:12.704 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:34:12.704 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:34:12.704 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:12.705 rmmod nvme_tcp
00:34:12.705 rmmod nvme_fabrics
00:34:12.705 rmmod nvme_keyring
00:34:12.705 05:54:05
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1778306 ']' 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1778306 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1778306 ']' 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1778306 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1778306 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1778306' 00:34:12.705 killing process with pid 1778306 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1778306 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1778306 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:12.705 05:54:05 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.705 05:54:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:14.606 00:34:14.606 real 0m22.185s 00:34:14.606 user 0m58.567s 00:34:14.606 sys 0m4.515s 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.606 ************************************ 00:34:14.606 END TEST nvmf_bdevperf 00:34:14.606 ************************************ 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.606 ************************************ 00:34:14.606 START TEST nvmf_target_disconnect 00:34:14.606 ************************************ 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:14.606 * Looking for test storage... 
00:34:14.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.606 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.607 05:54:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:14.607 05:54:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:16.509 
05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.509 05:54:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:16.509 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:16.510 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.510 05:54:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:16.510 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:16.510 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:16.510 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:16.510 05:54:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:16.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:34:16.510 00:34:16.510 --- 10.0.0.2 ping statistics --- 00:34:16.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.510 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
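The interface plumbing traced around this point (nvmf/common.sh@248 through @267) builds the split initiator/target topology the ping checks verify. As a standalone recap, the sequence can be sketched as a dry run; the commands below are only echoed, since executing them requires root and the cvl_0_* NICs present on this CI host:

```shell
#!/usr/bin/env bash
# Dry-run recap of the nvmf_tcp_init steps traced in the log: the target
# NIC (cvl_0_0, 10.0.0.2) moves into a network namespace, the initiator
# NIC (cvl_0_1, 10.0.0.1) stays in the root namespace, and TCP port 4420
# is opened for the NVMe-oF listener.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # echo instead of executing (root + real NICs needed)

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # reachability check, as in the log
```

Putting only the target side in a namespace lets one host act as both NVMe/TCP initiator and target over a real link, which is why the log pings each address from the opposite side.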
00:34:16.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:34:16.510 00:34:16.510 --- 10.0.0.1 ping statistics --- 00:34:16.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.510 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:16.510 ************************************ 00:34:16.510 START TEST nvmf_target_disconnect_tc1 00:34:16.510 ************************************ 00:34:16.510 05:54:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:16.510 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:16.511 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.511 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.511 [2024-07-25 05:54:09.980616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.511 [2024-07-25 05:54:09.980680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x89e590 with addr=10.0.0.2, port=4420 00:34:16.511 [2024-07-25 05:54:09.980714] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:16.511 [2024-07-25 05:54:09.980738] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:16.511 [2024-07-25 05:54:09.980766] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:16.511 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:16.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:16.511 Initializing NVMe Controllers 00:34:16.511 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:34:16.511 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:16.511 05:54:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:16.511 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:16.511 00:34:16.511 real 0m0.091s 00:34:16.511 user 0m0.045s 00:34:16.511 sys 0m0.046s 00:34:16.511 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:16.511 05:54:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:16.511 ************************************ 00:34:16.511 END TEST nvmf_target_disconnect_tc1 00:34:16.511 ************************************ 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:16.511 ************************************ 00:34:16.511 START TEST nvmf_target_disconnect_tc2 00:34:16.511 ************************************ 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1782062 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1782062 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1782062 ']' 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:16.511 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.511 [2024-07-25 05:54:10.098911] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:34:16.511 [2024-07-25 05:54:10.098999] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.511 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.511 [2024-07-25 05:54:10.163500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:16.769 [2024-07-25 05:54:10.250741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.769 [2024-07-25 05:54:10.250792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.769 [2024-07-25 05:54:10.250821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.769 [2024-07-25 05:54:10.250833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.769 [2024-07-25 05:54:10.250843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:16.769 [2024-07-25 05:54:10.251187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:16.769 [2024-07-25 05:54:10.251264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:16.769 [2024-07-25 05:54:10.251313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:16.769 [2024-07-25 05:54:10.251316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.769 Malloc0 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.769 05:54:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.769 [2024-07-25 05:54:10.439027] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.769 05:54:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.769 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.769 [2024-07-25 05:54:10.467331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.027 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.027 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:17.027 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.027 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.027 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.027 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1782084 00:34:17.027 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:17.027 05:54:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:17.027 EAL: No free 2048 kB 
hugepages reported on node 1 00:34:18.936 05:54:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1782062 00:34:18.936 05:54:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.936 Read completed with error (sct=0, sc=8) 00:34:18.936 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 
00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 [2024-07-25 05:54:12.493767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 
starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 
starting I/O failed 00:34:18.937 [2024-07-25 05:54:12.494119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O 
failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 [2024-07-25 05:54:12.494425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 
00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Write completed with error (sct=0, sc=8) 00:34:18.937 starting I/O failed 00:34:18.937 Read completed with error (sct=0, sc=8) 00:34:18.938 starting I/O failed 00:34:18.938 Write completed with error (sct=0, sc=8) 00:34:18.938 starting I/O failed 00:34:18.938 Read completed with error (sct=0, sc=8) 00:34:18.938 starting I/O failed 00:34:18.938 [2024-07-25 05:54:12.494745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: 
*ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.938 [2024-07-25 05:54:12.495056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.495097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.495341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.495369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.495533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.495560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.495718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.495759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.495923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.495951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 
00:34:18.938 [2024-07-25 05:54:12.496142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.496167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.496309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.496335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.496520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.496546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.496729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.496758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.497017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.497058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 
00:34:18.938 [2024-07-25 05:54:12.497258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.497284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.497404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.497429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.497577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.497603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.497751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.497777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.497907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.497949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 
00:34:18.938 [2024-07-25 05:54:12.498123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.498159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.498339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.498366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.498495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.498521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.498670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.498695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.498843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.498868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 
00:34:18.938 [2024-07-25 05:54:12.499056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.499112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.499290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.499317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.499505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.499531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.499660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.499686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.499845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.499888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 
00:34:18.938 [2024-07-25 05:54:12.500152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.500203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.500376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.500402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.500536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.500563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.500751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.500777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.500958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.501011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 
00:34:18.938 [2024-07-25 05:54:12.501213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.501240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.501383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.501408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.501532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.501557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.501680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.501705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.501953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.502005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 
00:34:18.938 [2024-07-25 05:54:12.502186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.938 [2024-07-25 05:54:12.502216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.938 qpair failed and we were unable to recover it. 00:34:18.938 [2024-07-25 05:54:12.502373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.502399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.502556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.502583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.502744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.502771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.502930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.502971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 
00:34:18.939 [2024-07-25 05:54:12.503139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.503181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.503333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.503367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.503526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.503552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.503724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.503749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.503897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.503922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 
00:34:18.939 [2024-07-25 05:54:12.504073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.504098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.504225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.504255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.504393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.504420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.504556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.504581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.504740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.504767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 
00:34:18.939 [2024-07-25 05:54:12.504965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.504990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.505139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.505164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.505305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.505345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.505502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.505531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.505713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.505756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 
00:34:18.939 [2024-07-25 05:54:12.505899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.505943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.506097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.506142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.506354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.506382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.506529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.506555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.506713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.506739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 
00:34:18.939 [2024-07-25 05:54:12.506913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.506938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.507067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.507093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.507261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.507306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.507462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.507490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.507658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.507701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 
00:34:18.939 [2024-07-25 05:54:12.507919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.507968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.508117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.508143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.508291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.508317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.508476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.508501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.508659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.508686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 
00:34:18.939 [2024-07-25 05:54:12.508817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.508842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.508993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.509021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.509173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.509198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.939 [2024-07-25 05:54:12.509392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.939 [2024-07-25 05:54:12.509418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.939 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.509541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.509568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 
00:34:18.940 [2024-07-25 05:54:12.509694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.509719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.509850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.509876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.510125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.510152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.510312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.510339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.510490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.510515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 
00:34:18.940 [2024-07-25 05:54:12.510661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.510705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.510874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.510918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.511040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.511065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.511220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.511251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.511418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.511443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 
00:34:18.940 [2024-07-25 05:54:12.511562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.511587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.511768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.511793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.512054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.512106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.512260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.512286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.512464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.512492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 
00:34:18.940 [2024-07-25 05:54:12.512621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.512648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.512774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.512801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.512974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.513000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.513152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.513193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.513379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.513405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 
00:34:18.940 [2024-07-25 05:54:12.513529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.513571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.513734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.513763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.513901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.513929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.514058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.514086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.514220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.514254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 
00:34:18.940 [2024-07-25 05:54:12.514400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.514426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.514562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.514591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.514764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.514792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.514980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.515008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.515198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.515223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 
00:34:18.940 [2024-07-25 05:54:12.515366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.515392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.515516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.515542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.515714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.515755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.515920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.515949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 00:34:18.940 [2024-07-25 05:54:12.516117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.940 [2024-07-25 05:54:12.516144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.940 qpair failed and we were unable to recover it. 
00:34:18.940 [2024-07-25 05:54:12.516306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.940 [2024-07-25 05:54:12.516347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.940 qpair failed and we were unable to recover it.
00:34:18.940 [2024-07-25 05:54:12.516498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.940 [2024-07-25 05:54:12.516524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.940 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.516646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.516672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.516820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.516845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.517029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.517054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.517184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.517209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.517355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.517394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.517534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.517563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.517713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.517739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.517918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.517946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.518165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.518210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.518346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.518372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.518549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.518574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.518700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.518725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.518880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.518906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.519111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.519139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.519272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.519299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.519474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.519500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.519707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.519760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.519963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.519988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.520148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.520173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.520354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.520380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.520551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.520577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.520747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.520772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.520912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.520953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.521123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.521151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.521327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.521353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.521469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.521495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.521671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.521697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.521867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.521892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.522023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.522048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.522194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.941 [2024-07-25 05:54:12.522220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.941 qpair failed and we were unable to recover it.
00:34:18.941 [2024-07-25 05:54:12.522372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.522398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.522549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.522578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.522718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.522746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.522934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.522962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.523156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.523185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.523330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.523356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.523509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.523534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.523719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.523747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.523895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.523920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.524090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.524115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.524284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.524323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.524459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.524487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.524699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.524743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.524959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.525002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.525154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.525180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.525337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.525364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.525516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.525543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.525703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.525728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.525910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.525936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.526064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.526089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.526237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.526268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.526449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.526475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.526598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.526624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.526777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.526803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.526990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.527015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.527139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.527165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.527291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.527317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.527439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.527465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.527606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.527636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.527801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.527829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.527963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.527991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.528183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.528209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.528359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.528384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.528554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.528584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.528761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.528813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.528995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.529023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.529168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.529197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.529400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.529426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.942 qpair failed and we were unable to recover it.
00:34:18.942 [2024-07-25 05:54:12.529552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.942 [2024-07-25 05:54:12.529577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.529750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.529793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.529974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.530043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.530219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.530255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.530410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.530436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.530561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.530586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.530752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.530781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.530942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.530970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.531162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.531203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.531360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.531386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.531533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.531559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.531713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.531754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.531924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.531964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.532155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.532183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.532326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.532352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.532504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.532546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.532681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.532710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.532901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.532931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.533087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.533112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.533233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.533265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.533415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.533441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.533585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.533610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.533779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.533807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.534076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.534101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.534223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.534256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.943 [2024-07-25 05:54:12.534403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.943 [2024-07-25 05:54:12.534429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.943 qpair failed and we were unable to recover it. 00:34:18.943 [2024-07-25 05:54:12.534553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.943 [2024-07-25 05:54:12.534579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.943 qpair failed and we were unable to recover it. 00:34:18.943 [2024-07-25 05:54:12.534702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.943 [2024-07-25 05:54:12.534744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.943 qpair failed and we were unable to recover it. 00:34:18.943 [2024-07-25 05:54:12.534881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.943 [2024-07-25 05:54:12.534909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.943 qpair failed and we were unable to recover it. 00:34:18.943 [2024-07-25 05:54:12.535075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.943 [2024-07-25 05:54:12.535102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.943 qpair failed and we were unable to recover it. 
00:34:18.943 [2024-07-25 05:54:12.535457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.943 [2024-07-25 05:54:12.535496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.943 qpair failed and we were unable to recover it.
00:34:18.946 [2024-07-25 05:54:12.553311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.553337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.553534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.553559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.553734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.553758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.553876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.553902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.554049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.554074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 
00:34:18.946 [2024-07-25 05:54:12.554227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.554259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.554421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.554446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.554660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.554685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.554817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.554842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.554959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.554985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 
00:34:18.946 [2024-07-25 05:54:12.555162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.555203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.555415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.555442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.555565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.555590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.555711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.555738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.555885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.555927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 
00:34:18.946 [2024-07-25 05:54:12.556105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.556131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.946 [2024-07-25 05:54:12.556277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.946 [2024-07-25 05:54:12.556302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.946 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.556481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.556510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.556645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.556673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.556845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.556870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 
00:34:18.947 [2024-07-25 05:54:12.557052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.557076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.557223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.557260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.557439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.557464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.557636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.557665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.557813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.557838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 
00:34:18.947 [2024-07-25 05:54:12.558014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.558040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.558188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.558229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.558373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.558401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.558546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.558572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.558743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.558783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 
00:34:18.947 [2024-07-25 05:54:12.558959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.558984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.559137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.559162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.559289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.559315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.559467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.559493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.559641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.559667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 
00:34:18.947 [2024-07-25 05:54:12.559796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.559822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.559977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.560002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.560136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.560161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.560292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.560317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.560468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.560509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 
00:34:18.947 [2024-07-25 05:54:12.560691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.560717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.560863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.560889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.561034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.561059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.561229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.561263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.561404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.561429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 
00:34:18.947 [2024-07-25 05:54:12.561599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.561626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.561811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.561836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.561988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.562030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.562193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.562223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.562434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.562459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 
00:34:18.947 [2024-07-25 05:54:12.562588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.562613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.562768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.562794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.562939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.562965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.563122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.563147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 00:34:18.947 [2024-07-25 05:54:12.563268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.947 [2024-07-25 05:54:12.563294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.947 qpair failed and we were unable to recover it. 
00:34:18.948 [2024-07-25 05:54:12.563444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.563470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.563665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.563693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.563890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.563915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.564068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.564094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.564285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.564314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 
00:34:18.948 [2024-07-25 05:54:12.564459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.564487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.564630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.564654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.564811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.564836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.564984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.565026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.565195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.565223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 
00:34:18.948 [2024-07-25 05:54:12.565358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.565401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.565599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.565624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.565776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.565802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.565948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.565972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.566177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.566206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 
00:34:18.948 [2024-07-25 05:54:12.566392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.566418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.566580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.566608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.566769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.566797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.566991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.567017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 00:34:18.948 [2024-07-25 05:54:12.567170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.948 [2024-07-25 05:54:12.567194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.948 qpair failed and we were unable to recover it. 
00:34:18.948 [2024-07-25 05:54:12.567323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.948 [2024-07-25 05:54:12.567349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.948 qpair failed and we were unable to recover it.
00:34:18.948 [... identical record repeated ~114 more times between 05:54:12.567525 and 05:54:12.588066: connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it ...]
00:34:18.951 [2024-07-25 05:54:12.588263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.588289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 00:34:18.951 [2024-07-25 05:54:12.588423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.588451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 00:34:18.951 [2024-07-25 05:54:12.588583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.588610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 00:34:18.951 [2024-07-25 05:54:12.588805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.588831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 00:34:18.951 [2024-07-25 05:54:12.588961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.588989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 
00:34:18.951 [2024-07-25 05:54:12.589130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.589159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 00:34:18.951 [2024-07-25 05:54:12.589333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.589359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 00:34:18.951 [2024-07-25 05:54:12.589503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.589529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 00:34:18.951 [2024-07-25 05:54:12.589678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.589703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 00:34:18.951 [2024-07-25 05:54:12.589814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.589840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 
00:34:18.951 [2024-07-25 05:54:12.589991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.590021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 00:34:18.951 [2024-07-25 05:54:12.590171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.951 [2024-07-25 05:54:12.590196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.951 qpair failed and we were unable to recover it. 00:34:18.951 [2024-07-25 05:54:12.590351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.590378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.590525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.590551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.590672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.590697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 
00:34:18.952 [2024-07-25 05:54:12.590856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.590881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.591006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.591032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.591185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.591210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.591364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.591390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.591571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.591599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 
00:34:18.952 [2024-07-25 05:54:12.591761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.591789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.591984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.592008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.592152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.592180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.592331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.592357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.592508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.592534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 
00:34:18.952 [2024-07-25 05:54:12.592703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.592730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.592878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.592920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.593096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.593121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.593292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.593320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.593483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.593511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 
00:34:18.952 [2024-07-25 05:54:12.593655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.593680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.593804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.593828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.593973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.594002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.594191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.594219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.594399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.594425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 
00:34:18.952 [2024-07-25 05:54:12.594577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.594602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.594758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.594783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.594902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.594929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.595108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.595133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.595325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.595350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 
00:34:18.952 [2024-07-25 05:54:12.595493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.595521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.595684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.595712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.595910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.595935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.596086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.596110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.596257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.596299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 
00:34:18.952 [2024-07-25 05:54:12.596491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.596517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.596648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.596673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.596828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.596853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.597000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.597026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 00:34:18.952 [2024-07-25 05:54:12.597178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.952 [2024-07-25 05:54:12.597204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.952 qpair failed and we were unable to recover it. 
00:34:18.952 [2024-07-25 05:54:12.597362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.597388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.597558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.597597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.597737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.597765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.597957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.597984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.598111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.598139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 
00:34:18.953 [2024-07-25 05:54:12.598293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.598319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.598496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.598522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.598693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.598719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.598899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.598925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.599050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.599076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 
00:34:18.953 [2024-07-25 05:54:12.599229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.599261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.599384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.599411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.599555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.599582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.599753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.599782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.599934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.599960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 
00:34:18.953 [2024-07-25 05:54:12.600110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.600135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.600286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.600311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.600431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.600458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.600584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.600608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.600816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.600841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 
00:34:18.953 [2024-07-25 05:54:12.600987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.601013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.601168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.601193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.601338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.601364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.601486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.601511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.601663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.601689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 
00:34:18.953 [2024-07-25 05:54:12.601837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.601863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.602032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.602060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.602234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.602268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.602453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.602492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.602675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.602719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 
00:34:18.953 [2024-07-25 05:54:12.602924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.602967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.603120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.603146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.603292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.603318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.603503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.603529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.603698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.603723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 
00:34:18.953 [2024-07-25 05:54:12.603851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.603877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.603995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.604021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.604198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.953 [2024-07-25 05:54:12.604224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.953 qpair failed and we were unable to recover it. 00:34:18.953 [2024-07-25 05:54:12.604403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.604429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.604571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.604613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 
00:34:18.954 [2024-07-25 05:54:12.604777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.604820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.604990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.605043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.605192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.605218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.605415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.605446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.605614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.605643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 
00:34:18.954 [2024-07-25 05:54:12.605809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.605837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.606054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.606110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.606263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.606290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.606446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.606471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.606694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.606748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 
00:34:18.954 [2024-07-25 05:54:12.606937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.606965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.607132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.607157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.607307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.607336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.607484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.607509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.607676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.607719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 
00:34:18.954 [2024-07-25 05:54:12.607906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.607967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.608112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.608138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.608331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.608379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.608531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.608557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.608788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.608814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 
00:34:18.954 [2024-07-25 05:54:12.608982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.609025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.609149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.609176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.609325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.609351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.609525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.609550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.609745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.609771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 
00:34:18.954 [2024-07-25 05:54:12.609891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.609917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.610074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.610103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.610269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.610295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.610421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.610450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.610621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.610650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 
00:34:18.954 [2024-07-25 05:54:12.610793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.610838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.611015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.611040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.611214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.611239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.611393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.611419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.611570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.611595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 
00:34:18.954 [2024-07-25 05:54:12.611756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.954 [2024-07-25 05:54:12.611781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.954 qpair failed and we were unable to recover it. 00:34:18.954 [2024-07-25 05:54:12.611953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.611979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.612155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.612180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.612334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.612360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.612484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.612509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 
00:34:18.955 [2024-07-25 05:54:12.612659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.612685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.612832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.612858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.613026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.613056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.613200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.613225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.613420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.613446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 
00:34:18.955 [2024-07-25 05:54:12.613590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.613616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.613820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.613848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.613981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.614010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.614148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.614174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.614288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.614314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 
00:34:18.955 [2024-07-25 05:54:12.614436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.614463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.614626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.614652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.614777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.614803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.614954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.614980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.615110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.615151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 
00:34:18.955 [2024-07-25 05:54:12.615308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.615334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.615491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.615532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.615676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.615701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.615888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.615916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.616046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.616073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 
00:34:18.955 [2024-07-25 05:54:12.616253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.616279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.616403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.616428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.616580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.616606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.616728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.616754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.616899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.616924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 
00:34:18.955 [2024-07-25 05:54:12.617073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.617098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.617264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.617290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.617436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.617462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.617610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.955 [2024-07-25 05:54:12.617636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.955 qpair failed and we were unable to recover it. 00:34:18.955 [2024-07-25 05:54:12.617797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.617841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 
00:34:18.956 [2024-07-25 05:54:12.618007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.618034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.618172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.618200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.618373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.618399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.618551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.618576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.618755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.618781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 
00:34:18.956 [2024-07-25 05:54:12.618931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.618956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.619083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.619108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.619278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.619322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.619490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.619516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.619666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.619691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 
00:34:18.956 [2024-07-25 05:54:12.619814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.619839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.619969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.619995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.620120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.620145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.620305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.620340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.620496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.620521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 
00:34:18.956 [2024-07-25 05:54:12.620650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.620689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.620853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.620881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.621070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.621098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.621249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.621274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 00:34:18.956 [2024-07-25 05:54:12.621433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.956 [2024-07-25 05:54:12.621457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:18.956 qpair failed and we were unable to recover it. 
00:34:18.956 [2024-07-25 05:54:12.621630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.621657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.621823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.621851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.622007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.622034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.622229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.622270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.622401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.622426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.622622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.622647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.622772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.622796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.622922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.622948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.623148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.623177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.623353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.623379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.623528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.623552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.623697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.623723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.623879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.623904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.624018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.624043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.624171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.624196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.624415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.624451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.624631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.956 [2024-07-25 05:54:12.624674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:18.956 qpair failed and we were unable to recover it.
00:34:18.956 [2024-07-25 05:54:12.624881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.624908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.625062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.625090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.625265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.625295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.625487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.625530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.625738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.625764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.625993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.626055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.626227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.626263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.626394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.626420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.626564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.626589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.626713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.626738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.626859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.626885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.627082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.627110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.627298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.627328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.627495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.627520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.627721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.627749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.627899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.627926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.628051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.957 [2024-07-25 05:54:12.628082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:18.957 qpair failed and we were unable to recover it.
00:34:18.957 [2024-07-25 05:54:12.628207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.628232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.628376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.628402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.628550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.628575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.628711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.628739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.628910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.628936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.629088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.629113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.629232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.629263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.629384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.629410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.629554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.629580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.629727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.629752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.629932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.629957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.630128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.240 [2024-07-25 05:54:12.630154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.240 qpair failed and we were unable to recover it.
00:34:19.240 [2024-07-25 05:54:12.630279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.630305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.630434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.630460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.630601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.630627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.630749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.630776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.630922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.630948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.631073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.631099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.631252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.631278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.631450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.631475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.631669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.631694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.631845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.631870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.632053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.632078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.632254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.632307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.632422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.632447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.632581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.632608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.632743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.632768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.632885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.632910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.633059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.633084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.633258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.633291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.633411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.633438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.633583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.633622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.633777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.633806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.633929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.633957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.634160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.634189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.634346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.634373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.634555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.634581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.634732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.634776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.634949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.634975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.635149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.635183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.635309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.635336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.635520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.241 [2024-07-25 05:54:12.635545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.241 qpair failed and we were unable to recover it.
00:34:19.241 [2024-07-25 05:54:12.635688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.635730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.635905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.635930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.636109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.636134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.636254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.636284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.636433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.636459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.636635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.636661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.636802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.636831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.637035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.637063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.637254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.637301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.637430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.637455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.637617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.637645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.637828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.637854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.637981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.638007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.638210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.638237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.638381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.638408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.638603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.638642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.638799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.638826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.638952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.638977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.639100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.242 [2024-07-25 05:54:12.639141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.242 qpair failed and we were unable to recover it.
00:34:19.242 [2024-07-25 05:54:12.639280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.242 [2024-07-25 05:54:12.639309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.242 qpair failed and we were unable to recover it. 00:34:19.242 [2024-07-25 05:54:12.639439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.242 [2024-07-25 05:54:12.639464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.242 qpair failed and we were unable to recover it. 00:34:19.242 [2024-07-25 05:54:12.639613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.242 [2024-07-25 05:54:12.639638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.242 qpair failed and we were unable to recover it. 00:34:19.242 [2024-07-25 05:54:12.639781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.242 [2024-07-25 05:54:12.639824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.242 qpair failed and we were unable to recover it. 00:34:19.242 [2024-07-25 05:54:12.639963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.242 [2024-07-25 05:54:12.639987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.242 qpair failed and we were unable to recover it. 
00:34:19.242 [2024-07-25 05:54:12.640142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.242 [2024-07-25 05:54:12.640174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.242 qpair failed and we were unable to recover it. 00:34:19.242 [2024-07-25 05:54:12.640351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.242 [2024-07-25 05:54:12.640379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.242 qpair failed and we were unable to recover it. 00:34:19.242 [2024-07-25 05:54:12.640505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.242 [2024-07-25 05:54:12.640530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.242 qpair failed and we were unable to recover it. 00:34:19.242 [2024-07-25 05:54:12.640678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.242 [2024-07-25 05:54:12.640703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.242 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.640857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.640883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 
00:34:19.243 [2024-07-25 05:54:12.641010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.641037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.641161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.641187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.641316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.641344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.641491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.641516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.641667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.641694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 
00:34:19.243 [2024-07-25 05:54:12.641857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.641883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.642035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.642060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.642231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.642269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.642412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.642440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.642619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.642645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 
00:34:19.243 [2024-07-25 05:54:12.642792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.642817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.642937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.642962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.643111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.643136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.643348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.643374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.643526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.643551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 
00:34:19.243 [2024-07-25 05:54:12.643688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.643714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.643917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.643946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.644090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.644118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.644289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.644332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.644506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.644532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 
00:34:19.243 [2024-07-25 05:54:12.644734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.644763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.644952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.644980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.645117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.645146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.645324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.645351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.645499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.645526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 
00:34:19.243 [2024-07-25 05:54:12.645671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.645698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.645859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.645887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.646057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.646083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.646230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.646262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 00:34:19.243 [2024-07-25 05:54:12.646418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.243 [2024-07-25 05:54:12.646443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.243 qpair failed and we were unable to recover it. 
00:34:19.243 [2024-07-25 05:54:12.646617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.646645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.646794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.646820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.646945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.646970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.647115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.647140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.647334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.647361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 
00:34:19.244 [2024-07-25 05:54:12.647534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.647560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.647705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.647750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.647920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.647950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.648136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.648162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.648318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.648345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 
00:34:19.244 [2024-07-25 05:54:12.648510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.648536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.648685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.648711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.648841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.648866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.648989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.649014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.649158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.649184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 
00:34:19.244 [2024-07-25 05:54:12.649305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.649332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.649479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.649504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.649676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.649701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.649822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.649848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.649974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.649999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 
00:34:19.244 [2024-07-25 05:54:12.650191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.650220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.650396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.650422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.650579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.650604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.650812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.650838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.650966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.650995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 
00:34:19.244 [2024-07-25 05:54:12.651191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.651217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.651382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.651408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.651572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.651598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.651746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.651772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.651946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.651972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 
00:34:19.244 [2024-07-25 05:54:12.652116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.652159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.652334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.652361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.652513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.652538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.652713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.652746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.652935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.652982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 
00:34:19.244 [2024-07-25 05:54:12.653154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.653180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.653304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.653330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.653519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.244 [2024-07-25 05:54:12.653545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.244 qpair failed and we were unable to recover it. 00:34:19.244 [2024-07-25 05:54:12.653730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.245 [2024-07-25 05:54:12.653755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.245 qpair failed and we were unable to recover it. 00:34:19.245 [2024-07-25 05:54:12.653899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.245 [2024-07-25 05:54:12.653927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.245 qpair failed and we were unable to recover it. 
00:34:19.245 [2024-07-25 05:54:12.654108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.245 [2024-07-25 05:54:12.654134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.245 qpair failed and we were unable to recover it. 00:34:19.245 [2024-07-25 05:54:12.654286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.245 [2024-07-25 05:54:12.654313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.245 qpair failed and we were unable to recover it. 00:34:19.245 [2024-07-25 05:54:12.654473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.245 [2024-07-25 05:54:12.654500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.245 qpair failed and we were unable to recover it. 00:34:19.245 [2024-07-25 05:54:12.654672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.245 [2024-07-25 05:54:12.654698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.245 qpair failed and we were unable to recover it. 00:34:19.245 [2024-07-25 05:54:12.654848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.245 [2024-07-25 05:54:12.654873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.245 qpair failed and we were unable to recover it. 
00:34:19.245 [2024-07-25 05:54:12.655040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.655068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.655259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.655285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.655422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.655448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.655579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.655604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.655742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.655767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.655884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.655911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.656061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.656086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.656261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.656309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.656471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.656498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.656649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.656692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.656857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.656888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.657037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.657062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.657211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.657237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.657466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.657492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.657632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.657658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.657783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.657813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.657972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.658000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.658172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.658198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.658357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.658383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.658504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.658529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.658677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.658702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.658859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.658884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.659006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.659031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.659153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.659180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.245 qpair failed and we were unable to recover it.
00:34:19.245 [2024-07-25 05:54:12.659306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.245 [2024-07-25 05:54:12.659332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.659536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.659564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.659708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.659733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.659925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.659953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.660125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.660184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.660331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.660358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.660480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.660507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.660639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.660683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.660864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.660891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.661053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.661105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.661301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.661328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.661466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.661491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.661625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.661652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.661829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.661856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.662032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.662058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.662233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.662270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.662444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.662472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.662615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.662641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.662798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.662836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.663052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.663102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.663277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.663311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.663439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.663465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.663643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.663672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.663827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.663852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.663970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.663997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.664128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.664155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.664302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.664329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.664487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.664514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.664666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.664709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.664858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.664884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.665034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.665060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.665231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.665271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.665452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.665478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.665608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.665635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.665810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.665838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.665975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.666001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.666153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.246 [2024-07-25 05:54:12.666195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.246 qpair failed and we were unable to recover it.
00:34:19.246 [2024-07-25 05:54:12.666408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.666434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.666552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.666577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.666757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.666794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.667045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.667070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.667199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.667226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.667370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.667396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.667541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.667566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.667716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.667741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.667872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.667914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.668079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.668107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.668249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.668275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.668403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.668428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.668599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.668626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.668766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.668791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.668912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.668937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.669111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.669136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.669287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.669312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.669450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.669475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.669601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.669645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.669813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.669841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.669958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.669999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.670162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.670190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.670390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.670417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.670558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.670587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.670766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.670791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.670936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.670962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.671092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.671119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.671292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.671318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.671443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.671468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.671622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.671647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.671793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.671821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.671964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.671991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.672112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.672137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.672316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.672343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.672485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.672515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.672667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.672692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.672831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.247 [2024-07-25 05:54:12.672856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.247 qpair failed and we were unable to recover it.
00:34:19.247 [2024-07-25 05:54:12.673031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.673055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.673191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.673229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.673420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.673458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.673584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.673611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.673761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.673787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.673961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.673987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.674133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.674159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.674343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.674369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.674520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.674546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.674700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.674724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.674847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.674873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.675008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.675034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.675157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.675182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.675312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.675338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.675468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.675492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.675666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.675691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.675836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.248 [2024-07-25 05:54:12.675862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.248 qpair failed and we were unable to recover it.
00:34:19.248 [2024-07-25 05:54:12.676011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.676036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.676184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.676209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.676343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.676368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.676491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.676516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.676634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.676659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 
00:34:19.248 [2024-07-25 05:54:12.676849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.676877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.677038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.677066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.677227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.677270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.677446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.677473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.677644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.677672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 
00:34:19.248 [2024-07-25 05:54:12.677812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.677837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.677985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.678010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.678193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.678221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.678421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.678446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.678568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.678610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 
00:34:19.248 [2024-07-25 05:54:12.678751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.678779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.678946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.678971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.679099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.679123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.679267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.679312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.679437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.679461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 
00:34:19.248 [2024-07-25 05:54:12.679652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.679680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.248 qpair failed and we were unable to recover it. 00:34:19.248 [2024-07-25 05:54:12.679862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.248 [2024-07-25 05:54:12.679889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.680014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.680039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.680166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.680190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.680354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.680380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 
00:34:19.249 [2024-07-25 05:54:12.680529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.680554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.680698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.680722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.680854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.680881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.681057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.681083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.681261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.681301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 
00:34:19.249 [2024-07-25 05:54:12.681445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.681470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.681633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.681658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.681821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.681850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.682045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.682071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.682195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.682219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 
00:34:19.249 [2024-07-25 05:54:12.682416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.682441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.682589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.682632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.682778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.682803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.682937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.682979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.683140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.683169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 
00:34:19.249 [2024-07-25 05:54:12.683309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.683334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.683449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.683474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.683644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.683669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.683788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.683813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.683961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.684002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 
00:34:19.249 [2024-07-25 05:54:12.684132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.684160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.684311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.684336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.684455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.684481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.684634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.684663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.684792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.684817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 
00:34:19.249 [2024-07-25 05:54:12.684940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.684966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.685114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.685141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.685304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.685330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.685461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.685486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.685640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.685665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 
00:34:19.249 [2024-07-25 05:54:12.685783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.685809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.685955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.685980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.686121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.686149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.686317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.686344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.249 qpair failed and we were unable to recover it. 00:34:19.249 [2024-07-25 05:54:12.686469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.249 [2024-07-25 05:54:12.686494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 
00:34:19.250 [2024-07-25 05:54:12.686678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.686706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.686889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.686914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.687086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.687115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.687293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.687336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.687460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.687486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 
00:34:19.250 [2024-07-25 05:54:12.687670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.687696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.687824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.687851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.688000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.688026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.688159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.688200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.688380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.688407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 
00:34:19.250 [2024-07-25 05:54:12.688588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.688613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.688753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.688781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.688924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.688952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.689132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.689158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 00:34:19.250 [2024-07-25 05:54:12.689306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.250 [2024-07-25 05:54:12.689332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.250 qpair failed and we were unable to recover it. 
00:34:19.250 [2024-07-25 05:54:12.689459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.250 [2024-07-25 05:54:12.689485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.250 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats continuously for timestamps 05:54:12.689616 through 05:54:12.710096: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED), the affected tqpair alternates between 0x5ef600 and 0x7fdb38000b90, and each attempt ends with "qpair failed and we were unable to recover it." Repeated entries condensed; only the timestamps and the tqpair handle vary between occurrences.]
00:34:19.253 [2024-07-25 05:54:12.710261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.710300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.710435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.710462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.710590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.710617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.710794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.710821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.710945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.710971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 
00:34:19.253 [2024-07-25 05:54:12.711086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.711112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.711253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.711281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.711431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.711461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.711634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.711659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.711808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.711834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 
00:34:19.253 [2024-07-25 05:54:12.712007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.712032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.712181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.712206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.712337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.712365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.712521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.712548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.712669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.712695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 
00:34:19.253 [2024-07-25 05:54:12.712870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.253 [2024-07-25 05:54:12.712896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.253 qpair failed and we were unable to recover it. 00:34:19.253 [2024-07-25 05:54:12.713072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.713098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.713252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.713278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.713455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.713480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.713631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.713657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 
00:34:19.254 [2024-07-25 05:54:12.713783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.713808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.713964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.713991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.714109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.714135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.714263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.714289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.714461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.714486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 
00:34:19.254 [2024-07-25 05:54:12.714637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.714662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.714778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.714803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.714951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.714976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.715154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.715179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.715325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.715351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 
00:34:19.254 [2024-07-25 05:54:12.715490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.715516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.715659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.715685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.715835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.715862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.716046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.716074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.716229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.716265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 
00:34:19.254 [2024-07-25 05:54:12.716444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.716470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.716643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.716669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.716794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.716819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.716990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.717016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.717165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.717192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 
00:34:19.254 [2024-07-25 05:54:12.717343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.717369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.717480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.717505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.717623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.717649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.717770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.717797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.717920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.717947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 
00:34:19.254 [2024-07-25 05:54:12.718098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.718123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.718255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.718282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.718408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.718433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.718629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.718668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 00:34:19.254 [2024-07-25 05:54:12.718834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.254 [2024-07-25 05:54:12.718861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.254 qpair failed and we were unable to recover it. 
00:34:19.255 [2024-07-25 05:54:12.719013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.719039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.719217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.719248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.719375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.719401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.719550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.719575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.719722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.719748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 
00:34:19.255 [2024-07-25 05:54:12.719902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.719929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.720108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.720134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.720263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.720290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.720441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.720466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.720598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.720623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 
00:34:19.255 [2024-07-25 05:54:12.720734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.720760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.720910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.720939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.721120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.721146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.721271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.721298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.721442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.721467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 
00:34:19.255 [2024-07-25 05:54:12.721588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.721613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.721765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.721791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.721946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.721971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.722122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.722149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.722298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.722324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 
00:34:19.255 [2024-07-25 05:54:12.722441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.722467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.722611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.722637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.722795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.722820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.722965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.722991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.723146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.723173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 
00:34:19.255 [2024-07-25 05:54:12.723332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.723361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.723509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.723535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.723685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.255 [2024-07-25 05:54:12.723711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.255 qpair failed and we were unable to recover it. 00:34:19.255 [2024-07-25 05:54:12.723855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.256 [2024-07-25 05:54:12.723881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.256 qpair failed and we were unable to recover it. 00:34:19.256 [2024-07-25 05:54:12.724038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.256 [2024-07-25 05:54:12.724063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.256 qpair failed and we were unable to recover it. 
00:34:19.256 [2024-07-25 05:54:12.724218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.256 [2024-07-25 05:54:12.724249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.256 qpair failed and we were unable to recover it. 00:34:19.256 [2024-07-25 05:54:12.724431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.256 [2024-07-25 05:54:12.724456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.256 qpair failed and we were unable to recover it. 00:34:19.256 [2024-07-25 05:54:12.724608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.256 [2024-07-25 05:54:12.724634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.256 qpair failed and we were unable to recover it. 00:34:19.256 [2024-07-25 05:54:12.724786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.256 [2024-07-25 05:54:12.724812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.256 qpair failed and we were unable to recover it. 00:34:19.256 [2024-07-25 05:54:12.724964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.256 [2024-07-25 05:54:12.724991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.256 qpair failed and we were unable to recover it. 
00:34:19.256 [2024-07-25 05:54:12.725116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.725141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.725301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.725327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.725477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.725504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.725661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.725691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.725817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.725842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.726002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.726029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.726149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.726176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.726351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.726377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.726495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.726521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.726668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.726694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.726824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.726849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.727009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.727034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.727161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.727186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.727339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.727365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.727541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.727567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.727717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.727742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.727867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.727892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.728045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.728072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.728193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.728219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.728343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.728370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.728497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.728524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.728651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.728677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.728828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.728855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.729005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.729030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.729183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.729208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.729365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.729391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.729544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.729570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.729683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.729708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.729851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.729877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.729997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.730022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.256 [2024-07-25 05:54:12.730190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.256 [2024-07-25 05:54:12.730219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.256 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.730381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.730407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.730556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.730581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.730701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.730727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.730840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.730864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.730991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.731017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.731135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.731159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.731308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.731334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.731480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.731505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.731628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.731653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.731776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.731801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.731950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.731976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.732096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.732120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.732254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.732280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.732411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.732435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.732582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.732606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.732728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.732754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.732875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.732899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.733036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.733061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.733194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.733219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.733353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.733377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.733527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.733553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.733688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.733713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.733827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.733852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.733979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.734004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.734121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.734145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.734282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.734309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.734438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.734463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.734613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.734638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.734767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.734792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.734934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.734959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.735098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.735124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.735266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.735292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.735408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.735433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.735571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.735598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.735749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.735773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.735920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.735946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.736070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.736095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.257 [2024-07-25 05:54:12.736236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.257 [2024-07-25 05:54:12.736269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.257 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.736397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.736423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.736588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.736614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.736736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.736764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.736914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.736939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.737068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.737093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.737213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.737238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.737368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.737394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.737518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.737542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.737693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.737719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.737865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.737891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.738040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.738065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.738198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.738223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.738385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.738411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.738561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.738586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.738727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.738752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.738879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.738905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.739060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.739084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.739212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.739238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.739514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.739540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.739664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.739690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.739820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.739844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.740020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.740045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.740166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.740192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.740338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.740365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.740521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.740545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.740692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.258 [2024-07-25 05:54:12.740717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.258 qpair failed and we were unable to recover it.
00:34:19.258 [2024-07-25 05:54:12.740842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.740867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 00:34:19.258 [2024-07-25 05:54:12.741016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.741041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 00:34:19.258 [2024-07-25 05:54:12.741185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.741210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 00:34:19.258 [2024-07-25 05:54:12.741349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.741381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 00:34:19.258 [2024-07-25 05:54:12.741503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.741528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 
00:34:19.258 [2024-07-25 05:54:12.741703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.741728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 00:34:19.258 [2024-07-25 05:54:12.741874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.741899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 00:34:19.258 [2024-07-25 05:54:12.742042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.742066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 00:34:19.258 [2024-07-25 05:54:12.742218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.742249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 00:34:19.258 [2024-07-25 05:54:12.742373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.742398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 
00:34:19.258 [2024-07-25 05:54:12.742525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.742549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 00:34:19.258 [2024-07-25 05:54:12.742712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-25 05:54:12.742737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.258 qpair failed and we were unable to recover it. 00:34:19.258 [2024-07-25 05:54:12.742896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.742921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.743038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.743063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.743186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.743211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 
00:34:19.259 [2024-07-25 05:54:12.743343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.743368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.743518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.743543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.743691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.743717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.743836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.743860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.743982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.744007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 
00:34:19.259 [2024-07-25 05:54:12.744162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.744188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.744327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.744353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.744474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.744500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.744640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.744665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.744810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.744835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 
00:34:19.259 [2024-07-25 05:54:12.744978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.745003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.745163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.745187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.745315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.745342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.745467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.745493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.745622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.745646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 
00:34:19.259 [2024-07-25 05:54:12.745767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.745794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.745935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.745960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.746100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.746125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.746285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.746311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.746488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.746513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 
00:34:19.259 [2024-07-25 05:54:12.746662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.746687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.746916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.746942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.747062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.747088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.747214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.747238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.747379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.747405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 
00:34:19.259 [2024-07-25 05:54:12.747553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.747577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.747701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.747726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.747850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.747876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.747997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.748022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.748146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.748177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 
00:34:19.259 [2024-07-25 05:54:12.748322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.748348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.748473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.748499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.748618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.748643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.748816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.748841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 00:34:19.259 [2024-07-25 05:54:12.748979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.749004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.259 qpair failed and we were unable to recover it. 
00:34:19.259 [2024-07-25 05:54:12.749122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-25 05:54:12.749147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.749311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.749337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.749480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.749506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.749639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.749665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.749784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.749808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 
00:34:19.260 [2024-07-25 05:54:12.749935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.749960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.750104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.750129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.750256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.750282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.750416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.750442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.750610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.750635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 
00:34:19.260 [2024-07-25 05:54:12.750800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.750826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.750986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.751012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.751140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.751165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.751299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.751324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.751472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.751498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 
00:34:19.260 [2024-07-25 05:54:12.751651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.751675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.751851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.751876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.751993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.752019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.752167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.752192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.752336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.752362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 
00:34:19.260 [2024-07-25 05:54:12.752494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.752519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.752692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.752721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.752926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.752952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.753126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.753151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.753282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.753308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 
00:34:19.260 [2024-07-25 05:54:12.753469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.753495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.753642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.753667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.753793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.753818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.753946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.753971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 00:34:19.260 [2024-07-25 05:54:12.754118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.260 [2024-07-25 05:54:12.754144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.260 qpair failed and we were unable to recover it. 
00:34:19.261 [2024-07-25 05:54:12.754261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.754287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.754434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.754460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.754592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.754618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.754795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.754820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.754949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.754974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 
00:34:19.261 [2024-07-25 05:54:12.755165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.261 [2024-07-25 05:54:12.755203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.261 qpair failed and we were unable to recover it.
00:34:19.261 [2024-07-25 05:54:12.755345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.261 [2024-07-25 05:54:12.755373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.261 qpair failed and we were unable to recover it.
00:34:19.261 [2024-07-25 05:54:12.755524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.261 [2024-07-25 05:54:12.755550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.261 qpair failed and we were unable to recover it.
00:34:19.261 [2024-07-25 05:54:12.755678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.261 [2024-07-25 05:54:12.755705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.261 qpair failed and we were unable to recover it.
00:34:19.261 [2024-07-25 05:54:12.755830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.261 [2024-07-25 05:54:12.755856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.261 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error triples for tqpair=0x5ef600 (addr=10.0.0.2, port=4420) repeated through 2024-07-25 05:54:12.758390 ...]
00:34:19.261 [2024-07-25 05:54:12.758540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.758564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.758687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.758713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.758863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.758888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.759058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.759082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.759236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.759269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 
00:34:19.261 [2024-07-25 05:54:12.759418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.759443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.759608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.759633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.759785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.759814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.759935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.759960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.760109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.760134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 
00:34:19.261 [2024-07-25 05:54:12.760282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.760308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.760437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.760462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.261 [2024-07-25 05:54:12.760597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.261 [2024-07-25 05:54:12.760636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.261 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.760764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.760791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.760914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.760940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 
00:34:19.262 [2024-07-25 05:54:12.761057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.761082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.761205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.761231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.761395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.761421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.761544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.761569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.761721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.761747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 
00:34:19.262 [2024-07-25 05:54:12.761868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.761894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.762049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.762075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.762206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.762231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.762369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fd620 is same with the state(5) to be set 00:34:19.262 [2024-07-25 05:54:12.762522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.762551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.762714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.762739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 
00:34:19.262 [2024-07-25 05:54:12.762858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.762883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.763008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.763033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.763158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.763182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.763358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.763384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.763505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.763531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 
00:34:19.262 [2024-07-25 05:54:12.763656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.763682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.763803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.763828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.763954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.763978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.764119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.764144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.764297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.764333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 
00:34:19.262 [2024-07-25 05:54:12.764461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.764486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.764598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.764622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.764747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.764774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.764914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.764941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.765122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.765148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 
00:34:19.262 [2024-07-25 05:54:12.765267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.765294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.765422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.765447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.765593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.765618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.765747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.765774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.765899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.765925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 
00:34:19.262 [2024-07-25 05:54:12.766051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.766077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.766208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.766234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.766369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.766402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.766526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.766552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 00:34:19.262 [2024-07-25 05:54:12.766731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.766756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.262 qpair failed and we were unable to recover it. 
00:34:19.262 [2024-07-25 05:54:12.766873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.262 [2024-07-25 05:54:12.766899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.767047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.767071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.767186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.767211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.767366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.767392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.767508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.767533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 
00:34:19.263 [2024-07-25 05:54:12.767662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.767687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.767848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.767874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.768005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.768030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.768155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.768180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.768326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.768353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 
00:34:19.263 [2024-07-25 05:54:12.768477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.768502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.768662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.768687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.768818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.768843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.768964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.768989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.769151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.769176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 
00:34:19.263 [2024-07-25 05:54:12.769304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.769330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.769477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.769502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.769629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.769655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.769774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.769800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.769926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.769952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 
00:34:19.263 [2024-07-25 05:54:12.770080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.770105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.770252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.770278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.770399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.770424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.770539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.770564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 00:34:19.263 [2024-07-25 05:54:12.770716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.263 [2024-07-25 05:54:12.770741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.263 qpair failed and we were unable to recover it. 
00:34:19.263 [2024-07-25 05:54:12.770860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.770885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.771027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.771052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.771176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.771201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.771358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.771384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.771508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.771533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.771659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.771684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.771803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.771829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.771959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.771984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.772132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.772157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.772307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.772333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.772455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.772480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.772602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.772628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.772744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.263 [2024-07-25 05:54:12.772774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.263 qpair failed and we were unable to recover it.
00:34:19.263 [2024-07-25 05:54:12.772922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.772947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.773090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.773115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.773270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.773296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.773445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.773471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.773599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.773624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.773782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.773807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.773935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.773960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.774082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.774108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.774262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.774287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.774442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.774467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.774649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.774674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.774791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.774816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.774939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.774964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.775137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.775177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.775346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.775373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.775503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.775531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.775666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.775692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.775811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.775836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.775982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.776009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.776159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.776184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.776336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.776362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.776485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.776512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.776652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.776679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.776833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.776861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.776994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.777021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.777174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.777201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.777350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.777391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.777522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.777548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.777670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.777697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.777823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.777850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.778013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.778038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.264 qpair failed and we were unable to recover it.
00:34:19.264 [2024-07-25 05:54:12.778198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.264 [2024-07-25 05:54:12.778238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.778388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.778415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.778540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.778567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.778693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.778719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.778872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.778898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.779071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.779097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.779246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.779285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.779451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.779477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.779609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.779637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.779792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.779819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.779945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.779970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.780100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.780125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.780262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.780289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.780413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.780438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.780558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.780583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.780712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.780739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.780872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.780899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.781022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.781049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.781257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.781297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.781434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.781461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.781585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.781613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.781770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.781796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.781959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.781987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.782120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.782145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.782263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.782289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.782417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.782443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.782569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.782594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.782721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.782746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.782895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.782921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.783053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.783078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.783192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.783217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.783364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.783390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.783537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.783563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.783682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.783710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.783833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.783860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.784014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.784040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.784198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.784226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.784401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.784426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.265 [2024-07-25 05:54:12.784578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.265 [2024-07-25 05:54:12.784604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.265 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.784723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.784749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.784895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.784921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.785045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.785072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.785192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.785218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.785361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.785387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.785512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.785538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.785682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.785707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.785850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.785876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.786025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.786052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.786202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.786228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.786359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.786389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.786512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.786537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.786653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.786679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.786853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.786878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.787004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.787030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.787149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.787174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.787332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.787358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.787477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.787502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.787655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.787680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.787807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.787832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.787953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.787978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.788098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.788123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.788261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.788288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.788409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.788434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.788565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.788590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.788707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.788732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.788848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.788872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.789022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.789047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.789158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.789182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.789346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.789372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.789509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.789535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.789661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.266 [2024-07-25 05:54:12.789686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.266 qpair failed and we were unable to recover it.
00:34:19.266 [2024-07-25 05:54:12.789832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.266 [2024-07-25 05:54:12.789857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.266 qpair failed and we were unable to recover it. 00:34:19.266 [2024-07-25 05:54:12.790003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.266 [2024-07-25 05:54:12.790028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.266 qpair failed and we were unable to recover it. 00:34:19.266 [2024-07-25 05:54:12.790176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.266 [2024-07-25 05:54:12.790201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.266 qpair failed and we were unable to recover it. 00:34:19.266 [2024-07-25 05:54:12.790335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.266 [2024-07-25 05:54:12.790361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.266 qpair failed and we were unable to recover it. 00:34:19.266 [2024-07-25 05:54:12.790496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.266 [2024-07-25 05:54:12.790521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.266 qpair failed and we were unable to recover it. 
00:34:19.266 [2024-07-25 05:54:12.790645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.266 [2024-07-25 05:54:12.790669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.266 qpair failed and we were unable to recover it. 00:34:19.266 [2024-07-25 05:54:12.790850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.790875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.790999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.791023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.791172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.791197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.791361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.791387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 
00:34:19.267 [2024-07-25 05:54:12.791509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.791536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.791684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.791713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.791861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.791887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.792015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.792040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.792190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.792215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 
00:34:19.267 [2024-07-25 05:54:12.792361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.792387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.792547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.792572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.792708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.792733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.792864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.792889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.793047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.793073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 
00:34:19.267 [2024-07-25 05:54:12.793196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.793221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.793343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.793369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.793518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.793544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.793666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.793691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.793812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.793837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 
00:34:19.267 [2024-07-25 05:54:12.793963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.793988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.794107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.794132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.794264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.794290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.794407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.794433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.794593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.794619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 
00:34:19.267 [2024-07-25 05:54:12.794744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.794769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.794899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.794925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.795111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.795136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.795305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.795331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.795446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.795471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 
00:34:19.267 [2024-07-25 05:54:12.795597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.795622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.795751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.795776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.795901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.795926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.796054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.796079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.796230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.796269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 
00:34:19.267 [2024-07-25 05:54:12.796395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.796421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.796544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.796569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.796695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.796720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.796868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.267 [2024-07-25 05:54:12.796894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.267 qpair failed and we were unable to recover it. 00:34:19.267 [2024-07-25 05:54:12.797041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.797066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 
00:34:19.268 [2024-07-25 05:54:12.797192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.797217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.797381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.797411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.797534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.797560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.797677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.797703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.797836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.797860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 
00:34:19.268 [2024-07-25 05:54:12.797990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.798016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.798145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.798170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.798312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.798337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.798471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.798497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.798619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.798644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 
00:34:19.268 [2024-07-25 05:54:12.798808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.798832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.798956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.798982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.799113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.799137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.799265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.799291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.799436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.799461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 
00:34:19.268 [2024-07-25 05:54:12.799614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.799638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.799757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.799783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.799926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.799951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.800076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.800101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.800214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.800239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 
00:34:19.268 [2024-07-25 05:54:12.800375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.800401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.800559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.800584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.800704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.800729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.800856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.800881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.801005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.801030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 
00:34:19.268 [2024-07-25 05:54:12.801165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.801191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.801319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.801344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.801468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.801493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.801626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.801652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 00:34:19.268 [2024-07-25 05:54:12.801781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.268 [2024-07-25 05:54:12.801806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.268 qpair failed and we were unable to recover it. 
00:34:19.268 [2024-07-25 05:54:12.801945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:19.268 [2024-07-25 05:54:12.801972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 
00:34:19.268 qpair failed and we were unable to recover it. 
[... the same three-message sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it) repeats for each retry through 2024-07-25 05:54:12.820 ...]
00:34:19.271 [2024-07-25 05:54:12.820588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.271 [2024-07-25 05:54:12.820613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.271 qpair failed and we were unable to recover it. 00:34:19.271 [2024-07-25 05:54:12.820727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.271 [2024-07-25 05:54:12.820753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.820882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.820907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.821029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.821054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.821180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.821205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 
00:34:19.272 [2024-07-25 05:54:12.821374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.821400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.821544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.821583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.821714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.821743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.821874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.821901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.822053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.822079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 
00:34:19.272 [2024-07-25 05:54:12.822206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.822231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.822428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.822454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.822605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.822631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.822752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.822777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.822896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.822922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 
00:34:19.272 [2024-07-25 05:54:12.823046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.823073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.823247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.823273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.823405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.823430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.823558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.823583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.823729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.823758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 
00:34:19.272 [2024-07-25 05:54:12.823880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.823906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.824066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.824092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.824236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.824269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.824396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.824422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.824580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.824607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 
00:34:19.272 [2024-07-25 05:54:12.824764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.824790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.824938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.824964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.825118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.825145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.825272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.825302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.825457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.825482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 
00:34:19.272 [2024-07-25 05:54:12.825605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.825630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.825751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.825776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.825906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.825932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.826057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.826082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.826198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.826223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 
00:34:19.272 [2024-07-25 05:54:12.826373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.826399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.826545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.826570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.826695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.826720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.826865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.272 [2024-07-25 05:54:12.826891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.272 qpair failed and we were unable to recover it. 00:34:19.272 [2024-07-25 05:54:12.827043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.827069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 
00:34:19.273 [2024-07-25 05:54:12.827223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.827256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.827379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.827404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.827524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.827549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.827695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.827720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.827848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.827873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 
00:34:19.273 [2024-07-25 05:54:12.828056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.828082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.828201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.828226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.828412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.828438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.828562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.828588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.828765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.828791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 
00:34:19.273 [2024-07-25 05:54:12.828939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.828965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.829082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.829108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.829257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.829283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.829430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.829456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.829606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.829631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 
00:34:19.273 [2024-07-25 05:54:12.829742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.829767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.829897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.829922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.830046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.830070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.830219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.830252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.830375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.830400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 
00:34:19.273 [2024-07-25 05:54:12.830560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.830591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.830756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.830782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.830928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.830952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.831110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.831149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.831296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.831324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 
00:34:19.273 [2024-07-25 05:54:12.831455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.831482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.831633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.831659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.831804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.831829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.831956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.831983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.832152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.832179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 
00:34:19.273 [2024-07-25 05:54:12.832328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.832354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.832491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.832516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.832659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.832684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.832808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.832833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 00:34:19.273 [2024-07-25 05:54:12.832982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.273 [2024-07-25 05:54:12.833007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.273 qpair failed and we were unable to recover it. 
00:34:19.273 [2024-07-25 05:54:12.833126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.273 [2024-07-25 05:54:12.833152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.273 qpair failed and we were unable to recover it.
[…the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously from 05:54:12.833 through 05:54:12.852, alternating between tqpair=0x5ef600, 0x7fdb40000b90, and 0x7fdb38000b90, each occurrence followed by "qpair failed and we were unable to recover it."…]
00:34:19.277 [2024-07-25 05:54:12.852209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.277 [2024-07-25 05:54:12.852236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.277 qpair failed and we were unable to recover it.
00:34:19.277 [2024-07-25 05:54:12.852394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.852420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.852546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.852572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.852693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.852719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.852893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.852918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.853046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.853072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 
00:34:19.277 [2024-07-25 05:54:12.853196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.853228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.853376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.853402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.853567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.853592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.853736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.853762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.853907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.853933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 
00:34:19.277 [2024-07-25 05:54:12.854084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.854109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.854234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.854266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.854399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.854427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.854576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.854602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.854730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.854756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 
00:34:19.277 [2024-07-25 05:54:12.854878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.854905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.855036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.855062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.855184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.855210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.855390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.855416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.855544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.855571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 
00:34:19.277 [2024-07-25 05:54:12.855693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.855719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.855868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.855893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.856016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.856042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.856180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.856219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.856386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.856414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 
00:34:19.277 [2024-07-25 05:54:12.856541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.856567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.856688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.856714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.856845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.856872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.857004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.857031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.857183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.857210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 
00:34:19.277 [2024-07-25 05:54:12.857370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.857397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.857515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.857540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.857692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.857722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.857850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.277 [2024-07-25 05:54:12.857876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.277 qpair failed and we were unable to recover it. 00:34:19.277 [2024-07-25 05:54:12.858023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.858049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 
00:34:19.278 [2024-07-25 05:54:12.858172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.858197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.858358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.858384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.858506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.858533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.858651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.858676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.858800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.858826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 
00:34:19.278 [2024-07-25 05:54:12.858939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.858965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.859093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.859118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.859270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.859296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.859448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.859473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.859589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.859615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 
00:34:19.278 [2024-07-25 05:54:12.859738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.859764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.859921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.859947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.860069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.860095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.860222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.860257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.860386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.860412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 
00:34:19.278 [2024-07-25 05:54:12.860536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.860561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.860712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.860738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.860865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.860891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.861019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.861045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.861166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.861192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 
00:34:19.278 [2024-07-25 05:54:12.861338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.861364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.861492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.861518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.861679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.861705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.861821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.861846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.861968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.861995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 
00:34:19.278 [2024-07-25 05:54:12.862124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.862150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.862276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.862303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.862431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.862456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.862604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.862630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.862755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.862781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 
00:34:19.278 [2024-07-25 05:54:12.862935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.278 [2024-07-25 05:54:12.862961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.278 qpair failed and we were unable to recover it. 00:34:19.278 [2024-07-25 05:54:12.863110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.863136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.863255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.863281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.863411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.863437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.863584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.863609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 
00:34:19.279 [2024-07-25 05:54:12.863726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.863751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.863873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.863899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.864015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.864040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.864161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.864191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.864341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.864368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 
00:34:19.279 [2024-07-25 05:54:12.864496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.864521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.864645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.864671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.864795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.864820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.864970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.864996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.865150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.865175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 
00:34:19.279 [2024-07-25 05:54:12.865301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.865328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.865453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.865480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.865611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.865637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.865755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.865780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.865936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.865962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 
00:34:19.279 [2024-07-25 05:54:12.866086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.866112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.866286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.866326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.866465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.866492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.866621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.866648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.866772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.866797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 
00:34:19.279 [2024-07-25 05:54:12.866921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.866946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.867071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.867096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.867224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.867262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.867414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.867440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.867581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.867606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 
00:34:19.279 [2024-07-25 05:54:12.867760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.867786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.867947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.867972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.868109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.868134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.868286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.868313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.868443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.868469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 
00:34:19.279 [2024-07-25 05:54:12.868593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.868623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.868756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.279 [2024-07-25 05:54:12.868781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.279 qpair failed and we were unable to recover it. 00:34:19.279 [2024-07-25 05:54:12.868896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.868922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.869067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.869092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.869252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.869291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 
00:34:19.280 [2024-07-25 05:54:12.869429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.869457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.869606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.869632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.869780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.869806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.869935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.869961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.870117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.870143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 
00:34:19.280 [2024-07-25 05:54:12.870325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.870353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.870477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.870503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.870659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.870685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.870803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.870829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.870960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.870985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 
00:34:19.280 [2024-07-25 05:54:12.871110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.871136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.871287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.871313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.871431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.871456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.871582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.871607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.871755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.871780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 
00:34:19.280 [2024-07-25 05:54:12.871918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.871943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.872062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.872088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.872211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.872236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.872380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.872406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.872555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.872581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 
00:34:19.280 [2024-07-25 05:54:12.872704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.872729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.872852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.872878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.873017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.873047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.873194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.873219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.873384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.873409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 
00:34:19.280 [2024-07-25 05:54:12.873529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.873555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.873677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.873702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.873854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.873880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.874010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.874036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.874188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.874214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 
00:34:19.280 [2024-07-25 05:54:12.874343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.874369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.874484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.874509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.280 qpair failed and we were unable to recover it. 00:34:19.280 [2024-07-25 05:54:12.874636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.280 [2024-07-25 05:54:12.874662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.874819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.874844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.874977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.875002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 
00:34:19.281 [2024-07-25 05:54:12.875128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.875153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.875308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.875335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.875457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.875483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.875630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.875656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.875769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.875794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 
00:34:19.281 [2024-07-25 05:54:12.875918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.875945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.876064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.876090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.876255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.876281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.876406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.876431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.876555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.876580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 
00:34:19.281 [2024-07-25 05:54:12.876707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.876733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.876854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.876880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.877005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.877030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.877178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.877204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.877338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.877364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 
00:34:19.281 [2024-07-25 05:54:12.877546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.877571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.877692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.877718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.877834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.877860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.877988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.878013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.878171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.878197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 
00:34:19.281 [2024-07-25 05:54:12.878325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.878351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.878470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.878497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.878622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.878648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.878827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.878852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.879001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.879026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 
00:34:19.281 [2024-07-25 05:54:12.879147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.879172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.879303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.879329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.879481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.879507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.879630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.879659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 00:34:19.281 [2024-07-25 05:54:12.879778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.281 [2024-07-25 05:54:12.879803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.281 qpair failed and we were unable to recover it. 
00:34:19.281 [2024-07-25 05:54:12.879934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.281 [2024-07-25 05:54:12.879960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.281 qpair failed and we were unable to recover it.
00:34:19.281 [2024-07-25 05:54:12.880081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.281 [2024-07-25 05:54:12.880107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.281 qpair failed and we were unable to recover it.
00:34:19.281 [2024-07-25 05:54:12.880229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.281 [2024-07-25 05:54:12.880263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.281 qpair failed and we were unable to recover it.
00:34:19.281 [2024-07-25 05:54:12.880408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.880437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.880563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.880590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.880725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.880750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.880900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.880925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.881051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.881078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.881210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.881236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.881363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.881389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.881509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.881534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.881664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.881690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.881845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.881871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.881996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.882021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.882172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.882198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.882334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.882373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.882505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.882532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.882662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.882688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.882839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.882865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.883012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.883038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.883164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.883190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.883318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.883345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.883502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.883528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.883699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.883724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.883849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.883874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.883994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.884024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.884151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.884178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.884300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.884326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.884489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.884515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.884656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.884682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.884855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.884881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.885002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.885027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.885152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.885177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.885333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.885359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.885488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.885515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.885636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.885662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.885835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.885860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.885984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.886010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.886131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.886156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.886298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.886338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.886499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.886526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.282 [2024-07-25 05:54:12.886640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.282 [2024-07-25 05:54:12.886666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.282 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.886796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.886824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.886993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.887019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.887143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.887169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.887299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.887325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.887503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.887529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.887697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.887722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.887879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.887906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.888061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.888087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.888215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.888252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.888400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.888426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.888542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.888574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.888746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.888772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.888920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.888948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.889122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.889148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.889281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.889307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.889426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.889452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.889572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.889597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.889746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.889771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.889886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.889911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.890041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.890066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.890218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.890248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.890370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.890395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.890539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.890565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.890686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.890711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.890896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.890922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.891077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.891102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.891229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.891264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.891420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.891445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.891623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.891648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.283 qpair failed and we were unable to recover it.
00:34:19.283 [2024-07-25 05:54:12.891762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.283 [2024-07-25 05:54:12.891788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.891909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.891935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.892059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.892084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.892203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.892228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.892380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.892406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.892534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.892561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.892736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.892762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.892910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.892935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.893058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.893087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.893232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.893266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.893411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.893437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.893589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.893614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.893765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.893790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.893942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.893967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.894117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.894142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.894288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.894314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.894425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.894451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.894565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.894591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.894737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.894762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.894882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.894907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.895060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.895085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.895215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.895255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.895383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.895408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.895590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.895615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.895733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.895758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.895884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.895909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.896082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.896107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.896222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.896254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.896375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.896402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.896535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.896561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.896690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.896716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.896870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.896895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.897046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.897073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.897235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.897270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.897396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.897421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.897580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.897605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.897731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.897758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.897903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.897929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.284 qpair failed and we were unable to recover it.
00:34:19.284 [2024-07-25 05:54:12.898050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.284 [2024-07-25 05:54:12.898075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.285 qpair failed and we were unable to recover it.
00:34:19.285 [2024-07-25 05:54:12.898227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.285 [2024-07-25 05:54:12.898261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.285 qpair failed and we were unable to recover it.
00:34:19.285 [2024-07-25 05:54:12.898424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.285 [2024-07-25 05:54:12.898450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.285 qpair failed and we were unable to recover it.
00:34:19.285 [2024-07-25 05:54:12.898611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.285 [2024-07-25 05:54:12.898636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.285 qpair failed and we were unable to recover it.
00:34:19.285 [2024-07-25 05:54:12.898764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.285 [2024-07-25 05:54:12.898790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.285 qpair failed and we were unable to recover it.
00:34:19.285 [2024-07-25 05:54:12.898936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.285 [2024-07-25 05:54:12.898964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.285 qpair failed and we were unable to recover it.
00:34:19.285 [2024-07-25 05:54:12.899089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.899114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.899263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.899290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.899439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.899464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.899593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.899618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.899738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.899764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 
00:34:19.285 [2024-07-25 05:54:12.899934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.899964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.900092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.900118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.900249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.900275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.900429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.900454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.900609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.900634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 
00:34:19.285 [2024-07-25 05:54:12.900806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.900831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.900982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.901007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.901154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.901179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.901295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.901323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.901455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.901480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 
00:34:19.285 [2024-07-25 05:54:12.901605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.901630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.901783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.901808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.901933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.901958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.902086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.902112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.902267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.902293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 
00:34:19.285 [2024-07-25 05:54:12.902439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.902465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.902611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.902636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.902759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.902785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.902929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.902954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.903128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.903154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 
00:34:19.285 [2024-07-25 05:54:12.903302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.903328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.903476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.903502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.903622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.903648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.903816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.903842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.903968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.903995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 
00:34:19.285 [2024-07-25 05:54:12.904159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.904185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.904345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.904371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.904523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.904553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.904700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.285 [2024-07-25 05:54:12.904726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.285 qpair failed and we were unable to recover it. 00:34:19.285 [2024-07-25 05:54:12.904877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.904902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 
00:34:19.286 [2024-07-25 05:54:12.905053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.905079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.905199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.905226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.905411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.905437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.905584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.905611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.905732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.905758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 
00:34:19.286 [2024-07-25 05:54:12.905907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.905933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.906092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.906117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.906265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.906291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.906451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.906477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.906630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.906655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 
00:34:19.286 [2024-07-25 05:54:12.906777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.906802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.906976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.907002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.907137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.907162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.907303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.907329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.907463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.907489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 
00:34:19.286 [2024-07-25 05:54:12.907606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.907632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.907783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.907809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.907954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.907979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.908126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.908152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.908284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.908311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 
00:34:19.286 [2024-07-25 05:54:12.908468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.908493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.908665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.908691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.908842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.908867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.908993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.909018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.909133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.909159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 
00:34:19.286 [2024-07-25 05:54:12.909327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.909353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.909502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.909528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.909705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.909731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.909877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.909902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.910048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.910073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 
00:34:19.286 [2024-07-25 05:54:12.910250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.910277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.910406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.910432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.910586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.910611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.910760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.910787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.910977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.911002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 
00:34:19.286 [2024-07-25 05:54:12.911121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.911146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.911330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.911356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.911481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.911506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.911658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.911687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.911862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.911888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 
00:34:19.286 [2024-07-25 05:54:12.912029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.912054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.912232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.912269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.912417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.286 [2024-07-25 05:54:12.912443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.286 qpair failed and we were unable to recover it. 00:34:19.286 [2024-07-25 05:54:12.912566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.912591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.912713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.912738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 
00:34:19.287 [2024-07-25 05:54:12.912857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.912882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.913012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.913038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.913191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.913217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.913353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.913381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.913525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.913560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 
00:34:19.287 [2024-07-25 05:54:12.913745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.913773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.913908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.913934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.914062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.914087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.914249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.914278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.914425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.914450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 
00:34:19.287 [2024-07-25 05:54:12.914626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.914651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.914780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.914807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.914927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.914952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.915102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.915128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.915259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.915285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 
00:34:19.287 [2024-07-25 05:54:12.915449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.915478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.915628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.915654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.915777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.915802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.915951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.915977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.916124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.916150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 
00:34:19.287 [2024-07-25 05:54:12.916321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.916352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.916506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.916532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.916685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.916710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.916861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.287 [2024-07-25 05:54:12.916887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.287 qpair failed and we were unable to recover it. 00:34:19.287 [2024-07-25 05:54:12.917047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.917084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 
00:34:19.571 [2024-07-25 05:54:12.917249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.917291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.917448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.917488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.917681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.917732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.917911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.917948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.918102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.918135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 
00:34:19.571 [2024-07-25 05:54:12.918266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.918293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.918426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.918452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.918583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.918613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.918783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.918811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.918983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.919024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 
00:34:19.571 [2024-07-25 05:54:12.919168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.919197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.919341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.919371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.919534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.919560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.919745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.919772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.919927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.919953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 
00:34:19.571 [2024-07-25 05:54:12.920106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.920133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.920325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.920353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.920488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.920516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.920667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.920692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.920844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.920872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 
00:34:19.571 [2024-07-25 05:54:12.921022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.921048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.921197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.921224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.921376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.921409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.921559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.921585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 00:34:19.571 [2024-07-25 05:54:12.921737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-07-25 05:54:12.921764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.571 qpair failed and we were unable to recover it. 
00:34:19.572 [2024-07-25 05:54:12.921918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.921943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.922102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.922128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.922257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.922314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.922437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.922463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.922623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.922649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 
00:34:19.572 [2024-07-25 05:54:12.922777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.922803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.922926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.922951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.923066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.923091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.923260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.923298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.923426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.923453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 
00:34:19.572 [2024-07-25 05:54:12.923632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.923658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.923792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.923817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.923982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.924007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.924146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.924173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.924331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.924359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 
00:34:19.572 [2024-07-25 05:54:12.924512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.924539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.924685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.924711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.924835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.924862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.925042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.925068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.925192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.925218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 
00:34:19.572 [2024-07-25 05:54:12.925379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.925405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.925592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.925618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.925776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.925802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.925953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.925978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.926121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.926156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 
00:34:19.572 [2024-07-25 05:54:12.926290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.926316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.926445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.926472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.926626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.926651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.926802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.926829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.926976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.927002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 
00:34:19.572 [2024-07-25 05:54:12.927129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.927155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.927340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.927366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.927490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.927523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.927703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.927730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.927877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.927903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 
00:34:19.572 [2024-07-25 05:54:12.928054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.928081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.928198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-07-25 05:54:12.928223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.572 qpair failed and we were unable to recover it. 00:34:19.572 [2024-07-25 05:54:12.928389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.573 [2024-07-25 05:54:12.928416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.573 qpair failed and we were unable to recover it. 00:34:19.573 [2024-07-25 05:54:12.928598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.573 [2024-07-25 05:54:12.928624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.573 qpair failed and we were unable to recover it. 00:34:19.573 [2024-07-25 05:54:12.928771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.573 [2024-07-25 05:54:12.928797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.573 qpair failed and we were unable to recover it. 
00:34:19.573 [2024-07-25 05:54:12.928923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.573 [2024-07-25 05:54:12.928951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.573 qpair failed and we were unable to recover it.
00:34:19.574 [... the three messages above repeated 60 more times for tqpair=0x7fdb30000b90, timestamps 05:54:12.929125 through 05:54:12.939384 ...]
00:34:19.574 [2024-07-25 05:54:12.939519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.574 [2024-07-25 05:54:12.939557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.574 qpair failed and we were unable to recover it.
00:34:19.576 [... the three messages above repeated 53 more times for tqpair=0x5ef600, timestamps 05:54:12.939706 through 05:54:12.948635 ...]
00:34:19.576 [2024-07-25 05:54:12.948757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.948783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.948936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.948962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.949110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.949136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.949283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.949310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.949433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.949458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 
00:34:19.576 [2024-07-25 05:54:12.949578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.949604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.949748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.949773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.949900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.949926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.950070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.950095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.950274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.950301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 
00:34:19.576 [2024-07-25 05:54:12.950449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.950475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.950647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.950673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.950796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.950822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.950949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.950975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.951099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.951124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 
00:34:19.576 [2024-07-25 05:54:12.951250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.951277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.951408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.951434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.951582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.951612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.951761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.951786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.951909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.951935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 
00:34:19.576 [2024-07-25 05:54:12.952084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.952109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.952231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.952266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.952389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.952415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.952539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.952564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 00:34:19.576 [2024-07-25 05:54:12.952683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.576 [2024-07-25 05:54:12.952709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.576 qpair failed and we were unable to recover it. 
00:34:19.577 [2024-07-25 05:54:12.952853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.952879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.953053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.953079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.953197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.953223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.953348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.953374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.953551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.953577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 
00:34:19.577 [2024-07-25 05:54:12.953699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.953724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.953870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.953896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.954043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.954069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.954222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.954253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.954402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.954428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 
00:34:19.577 [2024-07-25 05:54:12.954574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.954600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.954750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.954776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.954922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.954947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.955066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.955092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.955250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.955277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 
00:34:19.577 [2024-07-25 05:54:12.955403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.955429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.955577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.955603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.955752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.955779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.955903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.955929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.956077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.956103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 
00:34:19.577 [2024-07-25 05:54:12.956234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.956268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.956417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.956443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.956573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.956599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.956743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.956768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.956915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.956941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 
00:34:19.577 [2024-07-25 05:54:12.957093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.957119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.957273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.957300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.957449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.957475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.957623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.957648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.957780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.957806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 
00:34:19.577 [2024-07-25 05:54:12.957928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.957954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.958070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.958097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.958253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.958280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.958429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.958460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.958619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.958645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 
00:34:19.577 [2024-07-25 05:54:12.958822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.958848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.958980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.959007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.577 qpair failed and we were unable to recover it. 00:34:19.577 [2024-07-25 05:54:12.959133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.577 [2024-07-25 05:54:12.959160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 00:34:19.578 [2024-07-25 05:54:12.959312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.959340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 00:34:19.578 [2024-07-25 05:54:12.959515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.959541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 
00:34:19.578 [2024-07-25 05:54:12.959689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.959715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 00:34:19.578 [2024-07-25 05:54:12.959846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.959872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 00:34:19.578 [2024-07-25 05:54:12.959989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.960015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 00:34:19.578 [2024-07-25 05:54:12.960165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.960190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 00:34:19.578 [2024-07-25 05:54:12.960371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.960397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 
00:34:19.578 [2024-07-25 05:54:12.960549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.960574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 00:34:19.578 [2024-07-25 05:54:12.960722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.960748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 00:34:19.578 [2024-07-25 05:54:12.960932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.960958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 00:34:19.578 [2024-07-25 05:54:12.961082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.961108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 00:34:19.578 [2024-07-25 05:54:12.961230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.578 [2024-07-25 05:54:12.961264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.578 qpair failed and we were unable to recover it. 
00:34:19.578 [2024-07-25 05:54:12.961386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:19.578 [2024-07-25 05:54:12.961411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 
00:34:19.578 qpair failed and we were unable to recover it. 
[... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats approximately 113 more times with successive timestamps between 05:54:12.961386 and 05:54:12.980939; intermediate entries omitted ...]
00:34:19.581 [2024-07-25 05:54:12.980913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:19.581 [2024-07-25 05:54:12.980939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 
00:34:19.581 qpair failed and we were unable to recover it. 
00:34:19.581 [2024-07-25 05:54:12.981089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.981115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.981250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.981275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.981424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.981450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.981628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.981653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.981834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.981863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 
00:34:19.581 [2024-07-25 05:54:12.981995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.982021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.982132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.982157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.982291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.982317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.982459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.982485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.982609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.982635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 
00:34:19.581 [2024-07-25 05:54:12.982755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.982781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.982921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.982947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.983070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.983095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.983254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.983280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.983400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.983425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 
00:34:19.581 [2024-07-25 05:54:12.983600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.983626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.983780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.983805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.983931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.983956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.581 qpair failed and we were unable to recover it. 00:34:19.581 [2024-07-25 05:54:12.984109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.581 [2024-07-25 05:54:12.984135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.984279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.984305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 
00:34:19.582 [2024-07-25 05:54:12.984424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.984449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.984568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.984594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.984710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.984737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.984853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.984878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.984994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.985019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 
00:34:19.582 [2024-07-25 05:54:12.985141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.985166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.985317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.985344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.985467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.985493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.985607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.985633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.985780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.985805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 
00:34:19.582 [2024-07-25 05:54:12.985954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.985979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.986135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.986160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.986305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.986331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.986453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.986479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.986606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.986633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 
00:34:19.582 [2024-07-25 05:54:12.986757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.986784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.986914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.986939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.987061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.987088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.987257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.987283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.987430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.987455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 
00:34:19.582 [2024-07-25 05:54:12.987586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.987611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.987731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.987756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.987906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.987932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.988081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.988107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.988259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.988285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 
00:34:19.582 [2024-07-25 05:54:12.988433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.988459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.988632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.988657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.988805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.988831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.988980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.989005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.989144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.989169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 
00:34:19.582 [2024-07-25 05:54:12.989297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.989323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.989450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.989477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.989603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.989628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.989761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.989787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.989913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.989939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 
00:34:19.582 [2024-07-25 05:54:12.990083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.990109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.582 [2024-07-25 05:54:12.990230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.582 [2024-07-25 05:54:12.990262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.582 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.990389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.990415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.990569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.990597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.990754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.990779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 
00:34:19.583 [2024-07-25 05:54:12.990898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.990923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.991077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.991102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.991227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.991259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.991383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.991408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.991552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.991577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 
00:34:19.583 [2024-07-25 05:54:12.991705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.991741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.991921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.991951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.992087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.992114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.992291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.992318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.992441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.992467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 
00:34:19.583 [2024-07-25 05:54:12.992592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.992619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.992771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.992797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.992924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.992954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.993077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.993102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 00:34:19.583 [2024-07-25 05:54:12.993232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.583 [2024-07-25 05:54:12.993268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.583 qpair failed and we were unable to recover it. 
00:34:19.583 [2024-07-25 05:54:12.993425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.583 [2024-07-25 05:54:12.993451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.583 qpair failed and we were unable to recover it.
[... identical connect()/qpair error triplet repeated 114 more times for tqpair=0x5ef600 (addr=10.0.0.2, port=4420, errno = 111/ECONNREFUSED) between 05:54:12.993627 and 05:54:13.011937; repeats elided ...]
00:34:19.586 [2024-07-25 05:54:13.012082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.012108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 00:34:19.586 [2024-07-25 05:54:13.012269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.012304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 00:34:19.586 [2024-07-25 05:54:13.012451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.012477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 00:34:19.586 [2024-07-25 05:54:13.012598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.012623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 00:34:19.586 [2024-07-25 05:54:13.012742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.012768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 
00:34:19.586 [2024-07-25 05:54:13.012893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.012919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 00:34:19.586 [2024-07-25 05:54:13.013054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.013080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 00:34:19.586 [2024-07-25 05:54:13.013232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.013272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 00:34:19.586 [2024-07-25 05:54:13.013422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.013447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 00:34:19.586 [2024-07-25 05:54:13.013576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.013602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 
00:34:19.586 [2024-07-25 05:54:13.013728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.013753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 00:34:19.586 [2024-07-25 05:54:13.013864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.586 [2024-07-25 05:54:13.013890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.586 qpair failed and we were unable to recover it. 00:34:19.586 [2024-07-25 05:54:13.014023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.014049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.014165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.014190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.014312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.014339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 
00:34:19.587 [2024-07-25 05:54:13.014463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.014489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.014613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.014639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.014791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.014816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.014937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.014968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.015114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.015139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 
00:34:19.587 [2024-07-25 05:54:13.015260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.015296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.015447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.015472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.015597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.015622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.015743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.015768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.015892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.015920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 
00:34:19.587 [2024-07-25 05:54:13.016044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.016070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.016189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.016216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.016369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.016396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.016520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.016546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.016672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.016697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 
00:34:19.587 [2024-07-25 05:54:13.016822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.016848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.017020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.017045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.017191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.017217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.017353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.017379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.017498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.017524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 
00:34:19.587 [2024-07-25 05:54:13.017645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.017671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.017837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.017863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.018010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.018036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.018165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.018191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.018347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.018373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 
00:34:19.587 [2024-07-25 05:54:13.018499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.018524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.018644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.018670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.018795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.018821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.018938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.018965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.019120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.019147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 
00:34:19.587 [2024-07-25 05:54:13.019277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.019304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.019436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.019462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.019616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.019641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.019769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.019794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 00:34:19.587 [2024-07-25 05:54:13.019934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.587 [2024-07-25 05:54:13.019960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.587 qpair failed and we were unable to recover it. 
00:34:19.587 [2024-07-25 05:54:13.020088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.020113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.020256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.020292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.020409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.020435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.020564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.020590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.020720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.020745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 
00:34:19.588 [2024-07-25 05:54:13.020869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.020895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.021016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.021042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.021164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.021190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.021310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.021337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.021469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.021509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 
00:34:19.588 [2024-07-25 05:54:13.021630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.021656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.021805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.021831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.021967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.021992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.022138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.022164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.022290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.022316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 
00:34:19.588 [2024-07-25 05:54:13.022438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.022463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.022623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.022648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.022769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.022794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.022951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.022976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.023100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.023126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 
00:34:19.588 [2024-07-25 05:54:13.023256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.023291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.023410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.023436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.023567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.023592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.023751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.023777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 00:34:19.588 [2024-07-25 05:54:13.023949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.023975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 
00:34:19.588 [2024-07-25 05:54:13.024125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.588 [2024-07-25 05:54:13.024150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.588 qpair failed and we were unable to recover it. 
00:34:19.591 [2024-07-25 05:54:13.043160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.591 [2024-07-25 05:54:13.043186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.591 qpair failed and we were unable to recover it. 00:34:19.591 [2024-07-25 05:54:13.043307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.591 [2024-07-25 05:54:13.043334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.591 qpair failed and we were unable to recover it. 00:34:19.591 [2024-07-25 05:54:13.043487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.591 [2024-07-25 05:54:13.043513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.591 qpair failed and we were unable to recover it. 00:34:19.591 [2024-07-25 05:54:13.043637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.591 [2024-07-25 05:54:13.043662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.591 qpair failed and we were unable to recover it. 00:34:19.591 [2024-07-25 05:54:13.043784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.591 [2024-07-25 05:54:13.043810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.591 qpair failed and we were unable to recover it. 
00:34:19.592 [2024-07-25 05:54:13.043938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.043968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.044120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.044145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.044276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.044302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.044481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.044506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.044648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.044674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 
00:34:19.592 [2024-07-25 05:54:13.044802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.044828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.044956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.044981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.045103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.045128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.045283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.045309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.045426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.045451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 
00:34:19.592 [2024-07-25 05:54:13.045604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.045629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.048347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.048375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.048494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.048519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.048678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.048703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.048850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.048876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 
00:34:19.592 [2024-07-25 05:54:13.048998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.049024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.049170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.049196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.049327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.049353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.049484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.049509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.049658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.049683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 
00:34:19.592 [2024-07-25 05:54:13.049827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.049853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.049996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.050022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.050168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.050194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.050316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.050342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.050487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.050513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 
00:34:19.592 [2024-07-25 05:54:13.050661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.050687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.050806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.050832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.050975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.051001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.051152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.051178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.051329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.051356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 
00:34:19.592 [2024-07-25 05:54:13.051478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.051504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.051657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.051683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.051862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.051888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.052028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.052053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.052196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.052222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 
00:34:19.592 [2024-07-25 05:54:13.052365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.052391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.052521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.052547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.592 [2024-07-25 05:54:13.052677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.592 [2024-07-25 05:54:13.052702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.592 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.052827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.052852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.053003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.053029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 
00:34:19.593 [2024-07-25 05:54:13.053153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.053178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.053349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.053388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.053551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.053578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.053732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.053757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.053886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.053914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 
00:34:19.593 [2024-07-25 05:54:13.054090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.054115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.054267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.054294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.054436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.054462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.054613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.054638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.054763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.054788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 
00:34:19.593 [2024-07-25 05:54:13.054963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.054989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.055110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.055136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.055287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.055313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.055440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.055465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.055613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.055638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 
00:34:19.593 [2024-07-25 05:54:13.055770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.055796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.055945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.055970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.056092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.056117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.056286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.056312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.056459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.056484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 
00:34:19.593 [2024-07-25 05:54:13.056608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.056633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.056780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.056806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.056959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.056984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.057104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.057131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.057259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.057293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 
00:34:19.593 [2024-07-25 05:54:13.057422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.057448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.057607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.057632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.057777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.057802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.057987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.058016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.058139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.058164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 
00:34:19.593 [2024-07-25 05:54:13.058311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.058337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.058464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.058490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.058615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.058641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.058785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.058810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.058963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.058989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 
00:34:19.593 [2024-07-25 05:54:13.059162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.593 [2024-07-25 05:54:13.059188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.593 qpair failed and we were unable to recover it. 00:34:19.593 [2024-07-25 05:54:13.059309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.594 [2024-07-25 05:54:13.059335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.594 qpair failed and we were unable to recover it. 00:34:19.594 [2024-07-25 05:54:13.059478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.594 [2024-07-25 05:54:13.059504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.594 qpair failed and we were unable to recover it. 00:34:19.594 [2024-07-25 05:54:13.059629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.594 [2024-07-25 05:54:13.059654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.594 qpair failed and we were unable to recover it. 00:34:19.594 [2024-07-25 05:54:13.059784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.594 [2024-07-25 05:54:13.059809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.594 qpair failed and we were unable to recover it. 
00:34:19.594 [2024-07-25 05:54:13.059937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.059963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.060115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.060141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.060299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.060325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.060451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.060476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.060597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.060624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.060774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.060799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.060928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.060953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.061108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.061134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.061290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.061316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.061493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.061519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.061637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.061662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.061812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.061837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.061954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.061981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.062111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.062137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.062289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.062316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.062433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.062463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.062588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.062614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.062768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.062795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.062925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.062951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.063076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.063101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.063229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.063264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.063391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.063416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.063572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.063597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.063719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.063744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.063891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.063916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.064044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.064068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.064217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.064249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.594 [2024-07-25 05:54:13.064379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.594 [2024-07-25 05:54:13.064404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.594 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.064528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.064553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.064722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.064763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.064919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.064946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.065096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.065123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.065287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.065314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.065472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.065500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.065652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.065679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.065798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.065824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.065968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.065994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.066124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.066150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.066305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.066332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.066474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.066500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.066617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.066642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.066782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.066807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.066959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.066989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.067135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.067161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.067285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.067313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.067435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.067461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.067623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.067650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.067801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.067827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.068001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.068027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.068144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.068170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.068332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.068360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.068484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.068511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.068631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.068657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.068827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.068853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.068977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.069004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.069157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.069182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.069335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.069362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.069492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.069519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.069698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.069724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.069901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.069926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.070045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.070070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.070223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.070255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.070411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.070437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.070590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.070616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.070774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.070799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.595 [2024-07-25 05:54:13.070941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.595 [2024-07-25 05:54:13.070966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.595 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.071115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.071140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.071292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.071319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.071462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.071488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.071662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.071687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.071813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.071838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.071981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.072007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.072158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.072183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.072307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.072333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.072453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.072480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.072652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.072677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.072789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.072815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.072979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.073004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.073132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.073158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.073315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.073341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.073488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.073513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.073625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.073651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.073807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.073833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.073974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.074014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.074187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.074215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.074376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.074403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.074587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.074614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.074767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.074795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.074979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.075006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.075153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.075181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.075338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.075365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.075492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.075517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.075665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.075690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.075803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.075828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.075949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.596 [2024-07-25 05:54:13.075974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.596 qpair failed and we were unable to recover it.
00:34:19.596 [2024-07-25 05:54:13.076117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.596 [2024-07-25 05:54:13.076142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.596 qpair failed and we were unable to recover it. 00:34:19.596 [2024-07-25 05:54:13.076296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.596 [2024-07-25 05:54:13.076322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.596 qpair failed and we were unable to recover it. 00:34:19.596 [2024-07-25 05:54:13.076445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.596 [2024-07-25 05:54:13.076471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.596 qpair failed and we were unable to recover it. 00:34:19.596 [2024-07-25 05:54:13.076591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.596 [2024-07-25 05:54:13.076617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.596 qpair failed and we were unable to recover it. 00:34:19.596 [2024-07-25 05:54:13.076779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.596 [2024-07-25 05:54:13.076805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.596 qpair failed and we were unable to recover it. 
00:34:19.596 [2024-07-25 05:54:13.076953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.596 [2024-07-25 05:54:13.076979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.596 qpair failed and we were unable to recover it. 00:34:19.596 [2024-07-25 05:54:13.077103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.596 [2024-07-25 05:54:13.077128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.596 qpair failed and we were unable to recover it. 00:34:19.596 [2024-07-25 05:54:13.077281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.596 [2024-07-25 05:54:13.077308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.596 qpair failed and we were unable to recover it. 00:34:19.596 [2024-07-25 05:54:13.077433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.596 [2024-07-25 05:54:13.077458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.596 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.077608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.077633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 
00:34:19.597 [2024-07-25 05:54:13.077779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.077805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.077931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.077956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.078118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.078157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.078347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.078374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.078529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.078555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 
00:34:19.597 [2024-07-25 05:54:13.078731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.078762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.078916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.078942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.079099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.079125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.079305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.079332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.079457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.079483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 
00:34:19.597 [2024-07-25 05:54:13.079635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.079661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.079812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.079840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.079986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.080012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.080159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.080185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.080341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.080368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 
00:34:19.597 [2024-07-25 05:54:13.080544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.080571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.080746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.080772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.080896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.080922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.081071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.081097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.081229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.081262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 
00:34:19.597 [2024-07-25 05:54:13.081391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.081419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.081540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.081566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.081746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.081776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.081929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.081955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.082105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.082130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 
00:34:19.597 [2024-07-25 05:54:13.082304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.082330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.082483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.082510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.082697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.082723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.082872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.082898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.083021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.083047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 
00:34:19.597 [2024-07-25 05:54:13.083170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.083195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.083336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.083363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.083511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.083537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.083693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.083719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.083866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.083891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 
00:34:19.597 [2024-07-25 05:54:13.084038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.084064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.597 qpair failed and we were unable to recover it. 00:34:19.597 [2024-07-25 05:54:13.084217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.597 [2024-07-25 05:54:13.084247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.084370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.084395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.084546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.084572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.084719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.084746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 
00:34:19.598 [2024-07-25 05:54:13.084900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.084926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.085047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.085074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.085200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.085226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.085414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.085440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.085597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.085623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 
00:34:19.598 [2024-07-25 05:54:13.085767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.085800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.085918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.085945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.086066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.086091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.086269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.086295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.086421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.086449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 
00:34:19.598 [2024-07-25 05:54:13.086625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.086652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.086802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.086828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.086979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.087005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.087131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.087157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.087290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.087318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 
00:34:19.598 [2024-07-25 05:54:13.087472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.087499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.087625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.087651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.087809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.087836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.087983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.088010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.088180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.088206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 
00:34:19.598 [2024-07-25 05:54:13.088333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.088359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.088482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.088507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.088662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.088687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.088866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.088892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.089039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.089065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 
00:34:19.598 [2024-07-25 05:54:13.089213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.089238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.089377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.089402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.089523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.089550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.089675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.089701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.089822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.089849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 
00:34:19.598 [2024-07-25 05:54:13.089989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.090015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.090166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.090192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.090346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.090373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.090530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.598 [2024-07-25 05:54:13.090556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.598 qpair failed and we were unable to recover it. 00:34:19.598 [2024-07-25 05:54:13.090732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.090757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 
00:34:19.599 [2024-07-25 05:54:13.090911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.090938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.091091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.091119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.091252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.091278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.091454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.091480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.091628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.091654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 
00:34:19.599 [2024-07-25 05:54:13.091778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.091805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.091983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.092009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.092159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.092184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.092311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.092337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.092465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.092491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 
00:34:19.599 [2024-07-25 05:54:13.092620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.092650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.092802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.092827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.092975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.093001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.093151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.093177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.093333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.093359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 
00:34:19.599 [2024-07-25 05:54:13.093489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.093515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.093662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.093688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.093868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.093895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.094043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.094069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.094221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.094261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 
00:34:19.599 [2024-07-25 05:54:13.094390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.094416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.094548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.094573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.094720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.094746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.094867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.094893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.095022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.095048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 
00:34:19.599 [2024-07-25 05:54:13.095191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.095216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.095342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.095368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.095488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.095514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.095644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.095670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.599 [2024-07-25 05:54:13.095843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.095869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 
00:34:19.599 [2024-07-25 05:54:13.096022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.599 [2024-07-25 05:54:13.096047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.599 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.096198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.096223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.096373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.096398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.096521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.096547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.096721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.096747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 
00:34:19.600 [2024-07-25 05:54:13.096926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.096951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.097073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.097100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.097297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.097323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.097473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.097499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.097627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.097653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 
00:34:19.600 [2024-07-25 05:54:13.097793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.097819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.097985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.098011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.098153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.098178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.098302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.098329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.098450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.098477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 
00:34:19.600 [2024-07-25 05:54:13.098619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.098646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.098797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.098823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.098977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.099003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.099148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.099174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.099295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.099321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 
00:34:19.600 [2024-07-25 05:54:13.099477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.099507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.099629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.099655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.099832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.099858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.099988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.100015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.100143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.100171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 
00:34:19.600 [2024-07-25 05:54:13.100350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.100376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.100529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.100555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.100683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.100709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.100881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.100907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.101070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.101096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 
00:34:19.600 [2024-07-25 05:54:13.101271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.101297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.101450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.101476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.101644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.101670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.101849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.101874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.102048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.102074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 
00:34:19.600 [2024-07-25 05:54:13.102226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.102257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.102385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.102410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.102558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.600 [2024-07-25 05:54:13.102584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.600 qpair failed and we were unable to recover it. 00:34:19.600 [2024-07-25 05:54:13.102717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.102743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.102890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.102916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 
00:34:19.601 [2024-07-25 05:54:13.103066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.103092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.103211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.103237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.103395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.103421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.103542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.103568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.103702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.103728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 
00:34:19.601 [2024-07-25 05:54:13.103857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.103882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.104002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.104027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.104208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.104234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.104389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.104414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.104582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.104608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 
00:34:19.601 [2024-07-25 05:54:13.104746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.104771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.104922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.104950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.105105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.105133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.105253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.105280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.105407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.105434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 
00:34:19.601 [2024-07-25 05:54:13.105614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.105641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.105816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.105842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.105961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.105988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.106111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.106137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.106304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.106332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 
00:34:19.601 [2024-07-25 05:54:13.106474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.106505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.106658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.106685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.106835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.106861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.106973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.107000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.107132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.107159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 
00:34:19.601 [2024-07-25 05:54:13.107333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.107360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.107546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.107573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.107726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.107754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.107874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.107901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.108030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.108058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 
00:34:19.601 [2024-07-25 05:54:13.108211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.108238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.108422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.108449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.108623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.108650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.108772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.108799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 00:34:19.601 [2024-07-25 05:54:13.108964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.601 [2024-07-25 05:54:13.108990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.601 qpair failed and we were unable to recover it. 
00:34:19.601 [2024-07-25 05:54:13.109115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.601 [2024-07-25 05:54:13.109142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.601 qpair failed and we were unable to recover it.
00:34:19.601 [2024-07-25 05:54:13.109290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.109317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.109462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.109489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.109648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.109676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.109830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.109857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.109976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.110003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.110170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.110197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.110376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.110404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.110548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.110574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.110742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.110769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.110943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.110970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.111149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.111176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.111356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.111384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.111538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.111565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.111683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.111710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.111863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.111889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.112043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.112069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.112216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.112247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.112403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.112431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.112615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.112643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.112787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.112813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.112958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.112985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.113159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.113185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.113362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.113388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.113538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.113566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.113714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.113746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.113861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.113887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.114009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.114036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.114184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.114210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.114369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.114396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.114522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.114549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.114698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.114725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.114877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.114903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.115057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.115085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.115262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.115289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.115436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.115462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.115578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.115605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.115751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.115777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.602 qpair failed and we were unable to recover it.
00:34:19.602 [2024-07-25 05:54:13.115906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.602 [2024-07-25 05:54:13.115933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.116119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.116146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.116330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.116357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.116506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.116532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.116691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.116717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.116865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.116891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.117038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.117064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.117190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.117217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.117380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.117407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.117553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.117580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.117753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.117780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.117956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.117983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.118173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.118200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.118328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.118355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.118514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.118541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.118698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.118725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.118840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.118867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.119049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.119075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.119247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.119275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.119427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.119453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.119635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.119661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.119816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.119842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.119994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.120020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.120198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.120224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.120351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.120378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.120520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.120547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.120706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.120733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.120905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.120940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.121117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.121144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.121293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.121320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.121466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.121493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.121644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.121671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.121819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.121846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.121975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.122002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.603 [2024-07-25 05:54:13.122161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.603 [2024-07-25 05:54:13.122200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.603 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.122376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.122405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.122551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.122578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.122740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.122767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.122914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.122940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.123093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.123120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.123246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.123273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.123407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.123433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.123608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.123634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.123755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.123781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.123905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.123931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.124106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.124133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.124291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.124318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.124437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.124463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.124650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.124677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.124822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.124849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.125010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.125036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.125184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.125211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.125344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.125372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.125550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.125577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.125758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.125784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.125935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.125961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.126112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.604 [2024-07-25 05:54:13.126138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.604 qpair failed and we were unable to recover it.
00:34:19.604 [2024-07-25 05:54:13.126295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.126322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.126500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.126527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.126653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.126680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.126813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.126839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.126977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.127005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 
00:34:19.604 [2024-07-25 05:54:13.127161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.127188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.127318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.127345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.127469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.127497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.127646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.127672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.127801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.127829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 
00:34:19.604 [2024-07-25 05:54:13.127947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.127978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.128133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.128160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.128305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.604 [2024-07-25 05:54:13.128332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.604 qpair failed and we were unable to recover it. 00:34:19.604 [2024-07-25 05:54:13.128508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.128535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.128687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.128714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 
00:34:19.605 [2024-07-25 05:54:13.128859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.128892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.129017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.129044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.129193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.129219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.129377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.129403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.129528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.129556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 
00:34:19.605 [2024-07-25 05:54:13.129678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.129704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.129851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.129877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.130001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.130027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.130178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.130204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.130340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.130366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 
00:34:19.605 [2024-07-25 05:54:13.130514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.130540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.130676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.130704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.130855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.130882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.131029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.131055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.131232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.131265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 
00:34:19.605 [2024-07-25 05:54:13.131399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.131425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.131573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.131600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.131778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.131804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.131922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.131948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.132064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.132090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 
00:34:19.605 [2024-07-25 05:54:13.132209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.132235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.132414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.132441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.132609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.132649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.132776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.132804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.132938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.132966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 
00:34:19.605 [2024-07-25 05:54:13.133121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.133149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.133306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.133334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.133458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.133486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.133641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.133669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.133855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.133882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 
00:34:19.605 [2024-07-25 05:54:13.134038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.134065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.134212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.134239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.134396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.134424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.134598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.134625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.605 [2024-07-25 05:54:13.134748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.134775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 
00:34:19.605 [2024-07-25 05:54:13.134892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.605 [2024-07-25 05:54:13.134924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.605 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.135075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.135103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.135254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.135282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.135440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.135466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.135617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.135644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 
00:34:19.606 [2024-07-25 05:54:13.135824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.135851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.136004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.136032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.136213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.136240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.136429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.136456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.136635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.136662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 
00:34:19.606 [2024-07-25 05:54:13.136819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.136846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.136977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.137005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.137160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.137187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.137365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.137393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.137577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.137604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 
00:34:19.606 [2024-07-25 05:54:13.137782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.137809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.137940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.137968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.138146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.138182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.138361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.138389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.138539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.138566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 
00:34:19.606 [2024-07-25 05:54:13.138720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.138747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.138892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.138919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.139098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.139125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.139311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.139338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.139516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.139543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 
00:34:19.606 [2024-07-25 05:54:13.139699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.139726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.139873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.139900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.140019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.140047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.140177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.140204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.140351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.140381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 
00:34:19.606 [2024-07-25 05:54:13.140507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.140534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.140685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.140713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.140864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.140890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.141036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.141063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.141209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.141235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 
00:34:19.606 [2024-07-25 05:54:13.141392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.141418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.141595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.141621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.141799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.141826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-07-25 05:54:13.142007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-07-25 05:54:13.142033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-07-25 05:54:13.142177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-07-25 05:54:13.142203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 
00:34:19.607 [2024-07-25 05:54:13.142364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-07-25 05:54:13.142396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-07-25 05:54:13.142541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-07-25 05:54:13.142570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-07-25 05:54:13.142757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-07-25 05:54:13.142784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-07-25 05:54:13.142930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-07-25 05:54:13.142957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-07-25 05:54:13.143081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-07-25 05:54:13.143108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 
00:34:19.610 [2024-07-25 05:54:13.162183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.162209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.162346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.162373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.162496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.162528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.162698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.162725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.162867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.162893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 
00:34:19.610 [2024-07-25 05:54:13.163070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.163096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.163227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.163259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.163404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.163431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.163574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.163600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.163755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.163781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 
00:34:19.610 [2024-07-25 05:54:13.163935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.163961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.164111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.164138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.164262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.164300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.164483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.164514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.164665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.164692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 
00:34:19.610 [2024-07-25 05:54:13.164816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.164844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.164971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.164998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.165144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.165171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.165305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.165337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.165485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.165516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 
00:34:19.610 [2024-07-25 05:54:13.165667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.165694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.165814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.165841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.166004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.166031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.166185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.166212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.166386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.166412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 
00:34:19.610 [2024-07-25 05:54:13.166538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.166564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.166712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.166739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.166897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.166922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.167054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.167079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.167255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.167294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 
00:34:19.610 [2024-07-25 05:54:13.167415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.167440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.610 [2024-07-25 05:54:13.167594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.610 [2024-07-25 05:54:13.167621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.610 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.167742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.167768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.167894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.167919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.168076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.168101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 
00:34:19.611 [2024-07-25 05:54:13.168261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.168295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.168445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.168471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.168623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.168654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.168834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.168859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.169014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.169040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 
00:34:19.611 [2024-07-25 05:54:13.169199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.169225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.169415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.169441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.169560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.169586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.169709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.169735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.169863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.169889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 
00:34:19.611 [2024-07-25 05:54:13.170038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.170065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.170198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.170224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.170359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.170387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.170537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.170564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.170683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.170708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 
00:34:19.611 [2024-07-25 05:54:13.170857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.170882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.171032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.171058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.171227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.171257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.171384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.171410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.171567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.171593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 
00:34:19.611 [2024-07-25 05:54:13.171721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.171746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.171897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.171923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.172079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.172106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.172292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.172319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.172498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.172524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 
00:34:19.611 [2024-07-25 05:54:13.172667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.172693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.172845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.172871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.173010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.173036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.173187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.173213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.173352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.173378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 
00:34:19.611 [2024-07-25 05:54:13.173511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.173536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.173689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.173715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.173864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.173890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.174071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.174097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.611 qpair failed and we were unable to recover it. 00:34:19.611 [2024-07-25 05:54:13.174219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.611 [2024-07-25 05:54:13.174250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.612 qpair failed and we were unable to recover it. 
00:34:19.612 [2024-07-25 05:54:13.174407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.612 [2024-07-25 05:54:13.174434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.612 qpair failed and we were unable to recover it. 00:34:19.612 [2024-07-25 05:54:13.174613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.612 [2024-07-25 05:54:13.174639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.612 qpair failed and we were unable to recover it. 00:34:19.612 [2024-07-25 05:54:13.174780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.612 [2024-07-25 05:54:13.174806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.612 qpair failed and we were unable to recover it. 00:34:19.612 [2024-07-25 05:54:13.174951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.612 [2024-07-25 05:54:13.174976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.612 qpair failed and we were unable to recover it. 00:34:19.612 [2024-07-25 05:54:13.175126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.612 [2024-07-25 05:54:13.175152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.612 qpair failed and we were unable to recover it. 
00:34:19.612 [2024-07-25 05:54:13.175303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:19.612 [2024-07-25 05:54:13.175329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 
00:34:19.612 qpair failed and we were unable to recover it. 
00:34:19.615 [... the identical triplet (connect() failed, errno = 111 / sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry, with only the timestamps advancing, from 05:54:13.175473 through 05:54:13.195633; the repeats are elided here ...] 
00:34:19.615 [2024-07-25 05:54:13.195795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.195820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.196007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.196034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.196174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.196200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.196331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.196358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.196510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.196536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 
00:34:19.615 [2024-07-25 05:54:13.196647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.196674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.196853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.196878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.197035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.197065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.197257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.197283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.197411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.197437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 
00:34:19.615 [2024-07-25 05:54:13.197586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.197610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.197760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.197786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.197961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.197987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.198139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.198165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.198320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.198346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 
00:34:19.615 [2024-07-25 05:54:13.198477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.198512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.198672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.198698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.198851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.198877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.199062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.199089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.199276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.199303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 
00:34:19.615 [2024-07-25 05:54:13.199428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.199454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.199585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.199612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.199738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.199764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.199918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.199945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.200094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.200121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 
00:34:19.615 [2024-07-25 05:54:13.200250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.200276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.200400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.200428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.200591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.200617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.200774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.200800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-07-25 05:54:13.200955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-07-25 05:54:13.200980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-07-25 05:54:13.201154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.201181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.201314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.201341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.201521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.201558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.201688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.201715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.201873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.201899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-07-25 05:54:13.202072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.202099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.202252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.202290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.202469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.202505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.202631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.202658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.202797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.202823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-07-25 05:54:13.202949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.202976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.203117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.203144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.203263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.203300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.203424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.203451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.203601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.203627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-07-25 05:54:13.203788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.203814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.203934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.203960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.204108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.204139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.204298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.204326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.204510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.204536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-07-25 05:54:13.204690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.204715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.204870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.204897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.205047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.205075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.205198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.205224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.205385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.205411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-07-25 05:54:13.205598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.205624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.205801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.205827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.206001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.206027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.206177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.206204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.206324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.206350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-07-25 05:54:13.206508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.206534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.206667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.206693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.206849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.206875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.207000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.207025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.207174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.207200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-07-25 05:54:13.207367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.207394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.207516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.207542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.207694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.207720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.207870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.207896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-07-25 05:54:13.208014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.208041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-07-25 05:54:13.208217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-07-25 05:54:13.208250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.208383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.208409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.208558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.208584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.208741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.208767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.208891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.208917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-07-25 05:54:13.209091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.209117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.209264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.209299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.209455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.209480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.209622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.209648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.209801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.209826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-07-25 05:54:13.209985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.210011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.210192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.210219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.210371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.210398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.210547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.210572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.210725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.210752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-07-25 05:54:13.210909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.210935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.211089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.211115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.211290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.211320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.211465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.211491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.211663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.211690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-07-25 05:54:13.211844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.211869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.212042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.212068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.212193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.212220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.212357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.212383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.212535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.212562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-07-25 05:54:13.212753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.212779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.212929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.212955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.213111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.213137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.213298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.213325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.213446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.213474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-07-25 05:54:13.213645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.213686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.213852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.213880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.214036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.214063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.214214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.214247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-07-25 05:54:13.214410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.214437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-07-25 05:54:13.214565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-07-25 05:54:13.214592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.214767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.214794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.214922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.214949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.215105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.215134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.215300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.215329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 
00:34:19.618 [2024-07-25 05:54:13.215454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.215482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.215670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.215697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.215847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.215875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.215996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.216023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.216203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.216230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 
00:34:19.618 [2024-07-25 05:54:13.216379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.216405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.216568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.216596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.216754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.216781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.216961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.216989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.217164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.217191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 
00:34:19.618 [2024-07-25 05:54:13.217320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.217347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.217498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.217534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.217662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.217689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.217839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.217865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.218041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.218067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 
00:34:19.618 [2024-07-25 05:54:13.218253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.218280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.218403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.218429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.218584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.218619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.218770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.218797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.218937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.218964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 
00:34:19.618 [2024-07-25 05:54:13.219148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.219174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.219326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-07-25 05:54:13.219353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-07-25 05:54:13.219506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.219533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.219683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.219710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.219887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.219914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-07-25 05:54:13.220087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.220114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.220263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.220301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.220422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.220449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.220595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.220622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.220766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.220793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-07-25 05:54:13.220922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.220951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.221109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.221138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.221302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.221331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.221453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.221479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.221653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.221679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-07-25 05:54:13.221855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.221882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.222060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.222087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.222266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.222300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.222445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.222470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.222650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.222676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-07-25 05:54:13.222828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.222855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.223006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.223036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.223187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.223215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.223399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.223426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.223576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.223603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-07-25 05:54:13.223762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.223789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.223940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.223966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.224138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.224165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.224320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.224347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.224492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.224520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-07-25 05:54:13.224673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.224699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.224830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.224857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.225042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.225069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.225195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.225221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.225358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.225386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-07-25 05:54:13.225564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.225591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.225739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.225765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.225915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.225945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.226099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-07-25 05:54:13.226126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-07-25 05:54:13.226301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.226327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 
00:34:19.620 [2024-07-25 05:54:13.226450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.226477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-07-25 05:54:13.226651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.226692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-07-25 05:54:13.226825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.226853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-07-25 05:54:13.226974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.227001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-07-25 05:54:13.227190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.227217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 
00:34:19.620 [2024-07-25 05:54:13.227383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.227410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-07-25 05:54:13.227543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.227575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-07-25 05:54:13.227703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.227731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-07-25 05:54:13.227850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.227878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-07-25 05:54:13.228022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-07-25 05:54:13.228049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 
00:34:19.620 [2024-07-25 05:54:13.228201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.228229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.228374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.228401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.228554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.228581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.228729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.228756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.228876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.228903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.229050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.229076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.229227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.229260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.229409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.229435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.229579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.229605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.229754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.229781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.229926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.229952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.230083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.230109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.230267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.230306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.230452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.230478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.230632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.230659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.230788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.230814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.230941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.230967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.231129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.231155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.231320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.231347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.231474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.231508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.231651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.231678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.231832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.231859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.232009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.232035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.232162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.232189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.232344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.232371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.232522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.232549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.232697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.232724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.620 qpair failed and we were unable to recover it.
00:34:19.620 [2024-07-25 05:54:13.232856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.620 [2024-07-25 05:54:13.232883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.233033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.233074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.233202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.233231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.233436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.233464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.233620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.233647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.233773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.233799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.233945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.233971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.234091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.234120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.234286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.234314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.234460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.234486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.234648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.234675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.234829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.234857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.235003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.235031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.235176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.235206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.235352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.235380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.235517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.235544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.235674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.235701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.235847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.235874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.236000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.236027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.236145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.236172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.236322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.236349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.236472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.236500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.236626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.236654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.236786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.236813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.236969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.236997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.237119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.237148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.237316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.237343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.237472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.237508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.237667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.237694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.237814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.237841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.237983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.238011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.238172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.238213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.238368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.238397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.238556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.238584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.238762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.238789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.238946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.238973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.239129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.239156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.239296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.621 [2024-07-25 05:54:13.239325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.621 qpair failed and we were unable to recover it.
00:34:19.621 [2024-07-25 05:54:13.239451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.239477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.239633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.239661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.239807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.239834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.239983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.240014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.240147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.240174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.240333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.240359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.240510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.240536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.240681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.240708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.240830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.240858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.240981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.241008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.241135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.241161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.241315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.241342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.241522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.241549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.241694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.241720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.241848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.241875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.242002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.242030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.242152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.242179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.242351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.242392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.242520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.242549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.242671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.242698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.242880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.242906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.243035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.243062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.243218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.243258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.243414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.243443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.243597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.243629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.243757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.243786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.243940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.243967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.244098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.244153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.244317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.244368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.244501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.622 [2024-07-25 05:54:13.244529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.622 qpair failed and we were unable to recover it.
00:34:19.622 [2024-07-25 05:54:13.244650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.622 [2024-07-25 05:54:13.244682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.622 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.244837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.244864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.244983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.245009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.245161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.245188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.245317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.245343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 
00:34:19.623 [2024-07-25 05:54:13.245473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.245505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.245632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.245659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.245778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.245804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.245930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.245957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.246094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.246120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 
00:34:19.623 [2024-07-25 05:54:13.246289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.246315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.246446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.246474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.246618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.246644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.246795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.246822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.246954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.246981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 
00:34:19.623 [2024-07-25 05:54:13.247097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.247124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.623 [2024-07-25 05:54:13.247278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.623 [2024-07-25 05:54:13.247306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.623 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.247427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.247453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.247585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.247621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.247764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.247801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 
00:34:19.906 [2024-07-25 05:54:13.247960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.247987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.248120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.248147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.248279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.248306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.248439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.248465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.248604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.248630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 
00:34:19.906 [2024-07-25 05:54:13.248775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.248816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.248949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.248980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.249130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.249172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.249310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.249339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.249463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.249490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 
00:34:19.906 [2024-07-25 05:54:13.249637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.249665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.249801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.249828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.249980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.250007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.250128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.250157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.250318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.250346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 
00:34:19.906 [2024-07-25 05:54:13.250489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.250518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.250646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.250672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.250789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.250816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.250938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.250964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.251114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.251141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 
00:34:19.906 [2024-07-25 05:54:13.251265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.251307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.251426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.251452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.251582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.251610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.251738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.251764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.251887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.251913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 
00:34:19.906 [2024-07-25 05:54:13.252043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.252070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.252197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.252227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.252385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.252424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.252573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-07-25 05:54:13.252602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-07-25 05:54:13.252736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.252763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-07-25 05:54:13.252897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.252924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.253067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.253094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.253222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.253256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.253390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.253418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.253576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.253604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-07-25 05:54:13.253725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.253752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.253881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.253908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.254035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.254062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.254191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.254220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.254366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.254392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-07-25 05:54:13.254554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.254582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.254727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.254754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.254877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.254904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.255034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.255062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.255211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.255238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-07-25 05:54:13.255391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.255418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.255577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.255605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.255723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.255754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.255903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.255930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.256066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.256093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-07-25 05:54:13.256220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.256253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.256412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.256439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.256572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.256599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.256780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.256807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.256936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.256964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-07-25 05:54:13.257096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.257123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.257263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.257294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.257419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.257445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.257576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.257604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.257753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.257780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-07-25 05:54:13.257901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.257928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.258098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.258125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.258295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.258322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.258451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.258477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-07-25 05:54:13.258625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-07-25 05:54:13.258652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-07-25 05:54:13.258819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.907 [2024-07-25 05:54:13.258845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.907 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) against addr=10.0.0.2, port=4420 repeats for tqpair=0x7fdb40000b90, 0x7fdb38000b90, 0x7fdb30000b90, and 0x5ef600, from 05:54:13.258 through 05:54:13.278, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:34:19.910 [2024-07-25 05:54:13.278046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.910 [2024-07-25 05:54:13.278073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.910 qpair failed and we were unable to recover it.
00:34:19.910 [2024-07-25 05:54:13.278233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.278268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.278451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.278478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.278661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.278689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.278817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.278843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.278968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.278997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 
00:34:19.910 [2024-07-25 05:54:13.279128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.279156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.279308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.279336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.279461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.279489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.279643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.279671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.279800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.279828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 
00:34:19.910 [2024-07-25 05:54:13.280047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.280074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.280196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.280224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.280384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.280411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.280541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.280567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.280691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.280718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 
00:34:19.910 [2024-07-25 05:54:13.280896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.280923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.281044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.281070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.281267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.281307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.281443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.281472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.281621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.281648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 
00:34:19.910 [2024-07-25 05:54:13.281801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.281828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.281980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.282007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.282127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.282155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.282301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.282328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.282454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.282481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 
00:34:19.910 [2024-07-25 05:54:13.282642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.282668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.282788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.282815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 05:54:13.282927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 05:54:13.282954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.283110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.283143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.283282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.283309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 05:54:13.283457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.283484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.283636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.283662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.283786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.283814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.283934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.283961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.284112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.284138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 05:54:13.284299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.284328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.284510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.284538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.284667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.284695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.284825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.284852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.285004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.285030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 05:54:13.285157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.285183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.285322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.285350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.285483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.285510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.285656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.285683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.285829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.285856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 05:54:13.286018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.286046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.286213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.286263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.286427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.286457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.286584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.286612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.286742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.286770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 05:54:13.286925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.286952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.287109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.287149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.287297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.287326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.287447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.287474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.287603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.287631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 05:54:13.287752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.287779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.287903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.287930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.288084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.288111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.288260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.288300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.288459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.288487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 05:54:13.288635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.288662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.288811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.288838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.288984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.289010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.289167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.289193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.289332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.289360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 05:54:13.289508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.289535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.289691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.289718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.289871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.289898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.290024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.290051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.290210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.290237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 05:54:13.290375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.290401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.290519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.290546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.290697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.290724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 05:54:13.290844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 05:54:13.290870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.290993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.291019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 05:54:13.291169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.291202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.291341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.291368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.291493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.291520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.291633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.291659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.291777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.291804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 05:54:13.291950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.291976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.292131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.292157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.292299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.292344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.292492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.292522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.292647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.292674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 05:54:13.292830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.292857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.293011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.293037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.293158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.293184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.293336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.293364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.293498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.293525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 05:54:13.293642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.293669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.293847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.293874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.293987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.294013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.294164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.294190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.294323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.294352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 05:54:13.294541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.294568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.294704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.294730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.294893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.294920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.295044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.295071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.295224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.295257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 05:54:13.295383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.295409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.295558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.295585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.295707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.295734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.295871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.295897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.296029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.296056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 05:54:13.296172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.296199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.296368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.296395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.296510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.296536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.296660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.296688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.296848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.296875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 05:54:13.297031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.297058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.297190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.297217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.297365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.297407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.297551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.297591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.297764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.297792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 05:54:13.297922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.297950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.298077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.298104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.298229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.298262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.298426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.298454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 05:54:13.298594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 05:54:13.298635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 05:54:13.298776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.298804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.298979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.299006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.299150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.299182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.299315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.299342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.299493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.299519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 
00:34:19.913 [2024-07-25 05:54:13.299659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.299686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.299807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.299834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.299975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.300003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.300122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.300148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.300303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.300331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 
00:34:19.913 [2024-07-25 05:54:13.300481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.300508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.300659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.300686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.300805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.300832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.300956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.300982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.301102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.301129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 
00:34:19.913 [2024-07-25 05:54:13.301261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.301288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.301421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.301450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.301582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.301609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.301763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.301790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.301917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.301943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 
00:34:19.913 [2024-07-25 05:54:13.302065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.302093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.302256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.302297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.302459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.302486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.302614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.302642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.302766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.302793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 
00:34:19.913 [2024-07-25 05:54:13.302941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.302967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.303098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.303125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.303258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.303286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.303409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.303436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.303562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.303590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 
00:34:19.913 [2024-07-25 05:54:13.303716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.303743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.303867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.303894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.304062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.304102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.304279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.304307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.304449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.304476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 
00:34:19.913 [2024-07-25 05:54:13.304609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.304637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.304792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.304821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.304960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.304987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.305114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.305141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.305295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.305323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 
00:34:19.913 [2024-07-25 05:54:13.305450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.305477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.305625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.913 [2024-07-25 05:54:13.305652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.913 qpair failed and we were unable to recover it. 00:34:19.913 [2024-07-25 05:54:13.305777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.305804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.305963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.305990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.306117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.306144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 
00:34:19.914 [2024-07-25 05:54:13.306278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.306308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.306437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.306464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.306582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.306609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.306751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.306779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.306901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.306927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 
00:34:19.914 [2024-07-25 05:54:13.307042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.307069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.307218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.307253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.307382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.307411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.307564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.307591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.307739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.307766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 
00:34:19.914 [2024-07-25 05:54:13.307917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.307943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.308089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.308120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.308256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.308284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.308415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.308441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.308560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.308586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 
00:34:19.914 [2024-07-25 05:54:13.308762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.308788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.308941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.308968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.309094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.309122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.309260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.309288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.309445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.309472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 
00:34:19.914 [2024-07-25 05:54:13.309590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.309617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.309774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.309803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.309958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.309985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.310136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.310164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.310295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.310323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 
00:34:19.914 [2024-07-25 05:54:13.310449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.310476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.310601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.310628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.310775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.310802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.310923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.310950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.311108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.311134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 
00:34:19.914 [2024-07-25 05:54:13.311288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.311315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.311479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.311506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.311633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.311662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.311843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.311872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.312027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.312053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 
00:34:19.914 [2024-07-25 05:54:13.312179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.312208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.312361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.312388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.312504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.312531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.312688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.312715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.312876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.312904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 
00:34:19.914 [2024-07-25 05:54:13.313048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.313075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.313222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.313268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.313427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.313455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.313577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.914 [2024-07-25 05:54:13.313605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.914 qpair failed and we were unable to recover it. 00:34:19.914 [2024-07-25 05:54:13.313736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.313763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 
00:34:19.915 [2024-07-25 05:54:13.313903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.313931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.314097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.314123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.314297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.314324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.314452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.314479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.314628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.314654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 
00:34:19.915 [2024-07-25 05:54:13.314773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.314800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.314922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.314955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.315082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.315109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.315248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.315275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.315438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.315465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 
00:34:19.915 [2024-07-25 05:54:13.315620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.315648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.315801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.315828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.315978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.316005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.316156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.316183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.316331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.316358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 
00:34:19.915 [2024-07-25 05:54:13.316504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.316531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.316652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.316679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.316808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.316836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.316963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.316989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.317137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.317164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 
00:34:19.915 [2024-07-25 05:54:13.317291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.317319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.317460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.317487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.317617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.317643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.317773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.317800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.317949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.317976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 
00:34:19.915 [2024-07-25 05:54:13.318147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.318174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.318307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.318335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.318501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.318541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.318681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.318709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.318844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.318871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 
00:34:19.915 [2024-07-25 05:54:13.318995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.319022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.319169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.319195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.319357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.319384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.319510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.319538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.319671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.319699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 
00:34:19.915 [2024-07-25 05:54:13.319817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.319844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.319966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.319993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.320114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.320142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.320306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.320333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.320489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.320516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 
00:34:19.915 [2024-07-25 05:54:13.320671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.320698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.320827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.320853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.320975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.321002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.321148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.321174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.321334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.321362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 
00:34:19.915 [2024-07-25 05:54:13.321485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.915 [2024-07-25 05:54:13.321511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.915 qpair failed and we were unable to recover it. 00:34:19.915 [2024-07-25 05:54:13.321639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.916 [2024-07-25 05:54:13.321666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.916 qpair failed and we were unable to recover it. 00:34:19.916 [2024-07-25 05:54:13.321798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.916 [2024-07-25 05:54:13.321827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.916 qpair failed and we were unable to recover it. 00:34:19.916 [2024-07-25 05:54:13.321978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.916 [2024-07-25 05:54:13.322005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.916 qpair failed and we were unable to recover it. 00:34:19.916 [2024-07-25 05:54:13.322138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.916 [2024-07-25 05:54:13.322165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.916 qpair failed and we were unable to recover it. 
00:34:19.916 [2024-07-25 05:54:13.322308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.916 [2024-07-25 05:54:13.322349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.916 qpair failed and we were unable to recover it. 00:34:19.916 [2024-07-25 05:54:13.322485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.916 [2024-07-25 05:54:13.322514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.916 qpair failed and we were unable to recover it. 00:34:19.916 [2024-07-25 05:54:13.322666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.916 [2024-07-25 05:54:13.322693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.916 qpair failed and we were unable to recover it. 00:34:19.916 [2024-07-25 05:54:13.322845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.916 [2024-07-25 05:54:13.322872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.916 qpair failed and we were unable to recover it. 00:34:19.916 [2024-07-25 05:54:13.322992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.916 [2024-07-25 05:54:13.323018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.916 qpair failed and we were unable to recover it. 
00:34:19.916 [2024-07-25 05:54:13.323143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.323170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.323321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.323349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.323477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.323504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.323682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.323709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.323864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.323890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.324046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.324076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.324230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.324264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.324399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.324426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.324552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.324580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.324733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.324763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.324918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.324945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.325082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.325115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.325255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.325282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.325431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.325464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.325592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.325619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.325745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.325771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.325917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.325944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.326090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.326144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.326305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.326338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.326466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.326492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.326643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.326669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.326853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.326930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.327115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.327143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.327294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.327323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.327452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.327479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.327606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.327632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.327770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.327797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.327966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.328006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.328147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.328177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.328315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.328344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.328500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.328527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.328677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.328704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.328836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.328863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.329006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.329035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.329174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.329215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.329356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.329386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.329542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.329571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.916 [2024-07-25 05:54:13.329693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.916 [2024-07-25 05:54:13.329720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.916 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.329885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.329912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.330060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.330089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.330253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.330280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.330414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.330441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.330608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.330635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.330764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.330791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.330946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.330973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.331119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.331149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.331314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.331355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.331540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.331569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.331721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.331748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.331875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.331902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.332027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.332054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.332228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.332265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.332396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.332423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.332574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.332600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.332724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.332751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.332930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.332957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.333087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.333114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.333250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.333277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.333397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.333430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.333589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.333630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.333791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.333820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.333952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.333980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.334108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.334135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.334303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.334344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.334505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.334533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.334688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.334715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.334889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.334916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.335036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.335063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.335200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.335227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.335364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.335393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.335524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.335552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.335689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.335716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.335848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.335874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.335999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.336026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.336149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.336176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.336291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.336318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.336448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.336476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.336614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.336641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.336766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.336793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.917 qpair failed and we were unable to recover it.
00:34:19.917 [2024-07-25 05:54:13.336922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.917 [2024-07-25 05:54:13.336950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.337082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.337110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.337299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.337326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.337451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.337479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.337636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.337664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.337789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.337819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.337954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.337986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.338114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.338141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.338293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.338320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.338472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.338499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.338649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.338678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.338834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.338862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.338983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.339010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.339159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.339186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.339308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.339336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.339460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.339487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.339609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.339636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.339792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.339819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.339960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.339987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.340109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.340136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.340294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.340325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.340453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.340480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.340604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.340632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.340750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.340777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.340898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.340926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.341080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.341107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.341222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.341262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.341406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.341433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.341581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.341608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.341727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.341754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.341879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.341905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.342031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.342058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.342175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.342201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.342336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.342363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.342479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.342506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.342625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.918 [2024-07-25 05:54:13.342652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:19.918 qpair failed and we were unable to recover it.
00:34:19.918 [2024-07-25 05:54:13.342802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.342829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 00:34:19.918 [2024-07-25 05:54:13.342990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.343017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 00:34:19.918 [2024-07-25 05:54:13.343150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.343177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 00:34:19.918 [2024-07-25 05:54:13.343304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.343332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 00:34:19.918 [2024-07-25 05:54:13.343479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.343505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 
00:34:19.918 [2024-07-25 05:54:13.343631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.343658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 00:34:19.918 [2024-07-25 05:54:13.343834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.343861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 00:34:19.918 [2024-07-25 05:54:13.344012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.344038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 00:34:19.918 [2024-07-25 05:54:13.344164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.344193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 00:34:19.918 [2024-07-25 05:54:13.344325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.344353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 
00:34:19.918 [2024-07-25 05:54:13.344534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.918 [2024-07-25 05:54:13.344566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.918 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.344696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.344723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.344881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.344908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.345069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.345097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.345255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.345283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 
00:34:19.919 [2024-07-25 05:54:13.345438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.345465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.345620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.345647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.345764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.345792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.345969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.345996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.346147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.346175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 
00:34:19.919 [2024-07-25 05:54:13.346346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.346373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.346529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.346557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.346706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.346733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.346888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.346915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.347104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.347131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 
00:34:19.919 [2024-07-25 05:54:13.347305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.347333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.347492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.347519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.347638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.347665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.347826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.347853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.348007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.348034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 
00:34:19.919 [2024-07-25 05:54:13.348154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.348181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.348324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.348383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.348519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.348547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.348694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.348721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.348842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.348869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 
00:34:19.919 [2024-07-25 05:54:13.348989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.349015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.349161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.349188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.349332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.349360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.349559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.349598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.349732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.349760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 
00:34:19.919 [2024-07-25 05:54:13.349913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.349940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.350069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.350096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.350253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.350281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.350434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.350461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.350580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.350607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 
00:34:19.919 [2024-07-25 05:54:13.350734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.350761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.350912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.350938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.351114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.351140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.351293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.351322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.351447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.351474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 
00:34:19.919 [2024-07-25 05:54:13.351656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.351691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.351816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.351843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.352010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.352037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.352186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.352213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.352374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.352401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 
00:34:19.919 [2024-07-25 05:54:13.352525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.352551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.352704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.352731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.919 qpair failed and we were unable to recover it. 00:34:19.919 [2024-07-25 05:54:13.352856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.919 [2024-07-25 05:54:13.352883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.353031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.353059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.353187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.353213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 
00:34:19.920 [2024-07-25 05:54:13.353361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.353388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.353516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.353543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.353693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.353719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.353867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.353894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.354027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.354056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 
00:34:19.920 [2024-07-25 05:54:13.354213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.354246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.354375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.354402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.354544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.354571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.354701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.354728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.354856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.354882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 
00:34:19.920 [2024-07-25 05:54:13.355030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.355058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.355203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.355230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.355366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.355394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.355513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.355540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 00:34:19.920 [2024-07-25 05:54:13.355659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.920 [2024-07-25 05:54:13.355686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.920 qpair failed and we were unable to recover it. 
00:34:19.920 [2024-07-25 05:54:13.355804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.355832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.355956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.355983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.356103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.356129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.356281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.356309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.356430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.356457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.356586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.356613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.356728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.356755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.356922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.356949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.357069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.357095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.357217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.357249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.357404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.357431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.357560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.357587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.357764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.357791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.357942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.357969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.358095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.358122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.358253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.358280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.358397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.358428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.358572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.358599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.358775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.358802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.358930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.358957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.359138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.359165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.359328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.359356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.359520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.359548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.359672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.359699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.359847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.359876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.360002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.360030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.360176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.360203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.360363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.360391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.360543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.360570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.920 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.360692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.360719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.360871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.360898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.361046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.361073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.361199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.361226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.361391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.361418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.361543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.361570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.361686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.361714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.361871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.361898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.362059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.362086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.362247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.362276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.362432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.362460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.362607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.362634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.362760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.362787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.362939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.362966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.363140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.363166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.363305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.363332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.363477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.363503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.363663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.363690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.363837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.363864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.363995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.364022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.364170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.364197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.364327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.364355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.364499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.364526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.364672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.364699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.364821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.364849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.365027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.365053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.365204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.365232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.365385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.365412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.365535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.365562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.365716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.365742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.365868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.365895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.366021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.366047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.366203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.366230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.366358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.366385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.366510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.366537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.366658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.366687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.366837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.366866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.366995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.367022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.367172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.367199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.367356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.921 [2024-07-25 05:54:13.367384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.921 qpair failed and we were unable to recover it.
00:34:19.921 [2024-07-25 05:54:13.367505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.367532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.367677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.367704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.367831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.367858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.368009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.368036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.368155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.368182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.368346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.368373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.368498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.368525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.368698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.368725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.368851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.368878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.369003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.369030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.369182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.369209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.369368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.369395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.369536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.369564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.369673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.369699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.369863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.369890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.370055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.370086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.370235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.370269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.370398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.370425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.370578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.370604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.370728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.370755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.370878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.370905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.371059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.371087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.371238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.371271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.371392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.371419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.371574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.371601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.371733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.371760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.371902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.371929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.372058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.372086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.372266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.372294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.372444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.372471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.372592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.372619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.372769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.372796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.372944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.372971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.373120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.373147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.373299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.373326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.373474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.373500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.373652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.373679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.373824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.373851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.373998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.374024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.374158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.374186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.922 [2024-07-25 05:54:13.374361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.922 [2024-07-25 05:54:13.374389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.922 qpair failed and we were unable to recover it.
00:34:19.920 [2024-07-25 05:54:13.374508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.920 [2024-07-25 05:54:13.374535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.923 qpair failed and we were unable to recover it.
00:34:19.923 [2024-07-25 05:54:13.374660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.923 [2024-07-25 05:54:13.374687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.923 qpair failed and we were unable to recover it.
00:34:19.923 [2024-07-25 05:54:13.374869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.923 [2024-07-25 05:54:13.374895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.923 qpair failed and we were unable to recover it.
00:34:19.923 [2024-07-25 05:54:13.375022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.923 [2024-07-25 05:54:13.375050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.923 qpair failed and we were unable to recover it.
00:34:19.923 [2024-07-25 05:54:13.375211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.923 [2024-07-25 05:54:13.375238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:19.923 qpair failed and we were unable to recover it.
00:34:19.923 [2024-07-25 05:54:13.375413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.375439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.375592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.375619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.375772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.375799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.375944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.375971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.376114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.376140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 
00:34:19.923 [2024-07-25 05:54:13.376295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.376322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.376471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.376498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.376627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.376653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.376810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.376837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.376971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.376997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 
00:34:19.923 [2024-07-25 05:54:13.377175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.377202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.377362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.377388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.377509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.377536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.377693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.377720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.377896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.377922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 
00:34:19.923 [2024-07-25 05:54:13.378043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.378070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.378190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.378217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.378356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.378383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.378530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.378557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.378683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.378710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 
00:34:19.923 [2024-07-25 05:54:13.378823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.378850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.379004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.379031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.379174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.379200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.379369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.379396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.379527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.379553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 
00:34:19.923 [2024-07-25 05:54:13.379678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.379705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.379876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.379903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.380023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.380049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.380224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.380258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.380453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.380480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 
00:34:19.923 [2024-07-25 05:54:13.380599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.380625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.380756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.380782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.380907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.380934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.923 [2024-07-25 05:54:13.381062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.923 [2024-07-25 05:54:13.381089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.923 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.381235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.381268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 
00:34:19.924 [2024-07-25 05:54:13.381447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.381474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.381629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.381656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.381836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.381866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.381997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.382024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.382181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.382208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 
00:34:19.924 [2024-07-25 05:54:13.382340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.382369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.382548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.382575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.382728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.382755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.382923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.382950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.383083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.383110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 
00:34:19.924 [2024-07-25 05:54:13.383237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.383269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.383442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.383468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.383618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.383645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.383823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.383849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.383968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.383995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 
00:34:19.924 [2024-07-25 05:54:13.384120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.384147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.384345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.384372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.384493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.384520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.384665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.384692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.384842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.384869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 
00:34:19.924 [2024-07-25 05:54:13.385016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.385042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.385192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.385220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.385367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.385407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.385546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.385575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.385709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.385737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 
00:34:19.924 [2024-07-25 05:54:13.385884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.385912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.386064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.386091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.386210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.386237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.386397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.386424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.386536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.386568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 
00:34:19.924 [2024-07-25 05:54:13.386701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.386728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.386890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.386917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.387046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.387073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.387200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.387229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.387408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.387448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 
00:34:19.924 [2024-07-25 05:54:13.387614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.387641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.387786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.924 [2024-07-25 05:54:13.387813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.924 qpair failed and we were unable to recover it. 00:34:19.924 [2024-07-25 05:54:13.387946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.925 [2024-07-25 05:54:13.387974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.925 qpair failed and we were unable to recover it. 00:34:19.925 [2024-07-25 05:54:13.388150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.925 [2024-07-25 05:54:13.388177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.925 qpair failed and we were unable to recover it. 00:34:19.925 [2024-07-25 05:54:13.388334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.925 [2024-07-25 05:54:13.388362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.925 qpair failed and we were unable to recover it. 
00:34:19.925 [2024-07-25 05:54:13.388524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.925 [2024-07-25 05:54:13.388551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.925 qpair failed and we were unable to recover it. 00:34:19.925 [2024-07-25 05:54:13.388704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.925 [2024-07-25 05:54:13.388732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.925 qpair failed and we were unable to recover it. 00:34:19.925 [2024-07-25 05:54:13.388885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.925 [2024-07-25 05:54:13.388912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.925 qpair failed and we were unable to recover it. 00:34:19.925 [2024-07-25 05:54:13.389071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.925 [2024-07-25 05:54:13.389099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.925 qpair failed and we were unable to recover it. 00:34:19.925 [2024-07-25 05:54:13.389270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.925 [2024-07-25 05:54:13.389306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.925 qpair failed and we were unable to recover it. 
00:34:19.929 [2024-07-25 05:54:13.409193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.409221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.409351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.409377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.409530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.409557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.409681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.409708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.409884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.409910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 
00:34:19.929 [2024-07-25 05:54:13.410061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.410087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.410237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.410277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.410455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.410482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.410632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.410663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.410822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.410849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 
00:34:19.929 [2024-07-25 05:54:13.411023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.411050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.411199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.411225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.411360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.411387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.411546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.411574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.411728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.411754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 
00:34:19.929 [2024-07-25 05:54:13.411930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.411957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.412080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.412108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.412229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.412262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.412429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.412456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.412606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.412632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 
00:34:19.929 [2024-07-25 05:54:13.412784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.412811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.412967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.412993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.413152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.413178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.413325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.929 [2024-07-25 05:54:13.413353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.929 qpair failed and we were unable to recover it. 00:34:19.929 [2024-07-25 05:54:13.413527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.413554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 
00:34:19.930 [2024-07-25 05:54:13.413683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.413709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.413938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.413966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.414088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.414115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.414341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.414368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.414551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.414577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 
00:34:19.930 [2024-07-25 05:54:13.414728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.414754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.414904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.414930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.415052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.415080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.415232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.415263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.415407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.415433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 
00:34:19.930 [2024-07-25 05:54:13.415584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.415610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.415748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.415775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.415910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.415937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.416114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.416140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.416291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.416318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 
00:34:19.930 [2024-07-25 05:54:13.416439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.416466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.416620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.416647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.416821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.416847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.417004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.417031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.417204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.417230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 
00:34:19.930 [2024-07-25 05:54:13.417358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.417385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.417533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.417560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.417683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.417709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.417828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.417858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.418016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.418042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 
00:34:19.930 [2024-07-25 05:54:13.418192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.418218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.418406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.418433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.418607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.418633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.418760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.418786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.418945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.418971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 
00:34:19.930 [2024-07-25 05:54:13.419103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.419130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.419292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.419319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.419474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.419501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.419688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.419714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.419864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.419890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 
00:34:19.930 [2024-07-25 05:54:13.420046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.420072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.420226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.930 [2024-07-25 05:54:13.420265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.930 qpair failed and we were unable to recover it. 00:34:19.930 [2024-07-25 05:54:13.420385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.420411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 00:34:19.931 [2024-07-25 05:54:13.420564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.420590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 00:34:19.931 [2024-07-25 05:54:13.420722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.420749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 
00:34:19.931 [2024-07-25 05:54:13.420921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.420948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 00:34:19.931 [2024-07-25 05:54:13.421095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.421121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 00:34:19.931 [2024-07-25 05:54:13.421245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.421273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 00:34:19.931 [2024-07-25 05:54:13.421448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.421474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 00:34:19.931 [2024-07-25 05:54:13.421622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.421648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 
00:34:19.931 [2024-07-25 05:54:13.421778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.421804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 00:34:19.931 [2024-07-25 05:54:13.421927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.421953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 00:34:19.931 [2024-07-25 05:54:13.422086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.422113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 00:34:19.931 [2024-07-25 05:54:13.422258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.422285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 00:34:19.931 [2024-07-25 05:54:13.422437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.931 [2024-07-25 05:54:13.422464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.931 qpair failed and we were unable to recover it. 
00:34:19.931 [2024-07-25 05:54:13.422622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.931 [2024-07-25 05:54:13.422649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:19.931 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111, ECONNREFUSED) for tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 repeat from 05:54:13.422826 through 05:54:13.443067; each retry logs the same posix.c:1023 / nvme_tcp.c:2383 error pair followed by "qpair failed and we were unable to recover it." ...]
00:34:19.934 [2024-07-25 05:54:13.443219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.443257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.443406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.443433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.443581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.443607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.443834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.443860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.444036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.444062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 
00:34:19.934 [2024-07-25 05:54:13.444220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.444252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.444409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.444435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.444590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.444616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.444744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.444771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.444906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.444933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 
00:34:19.934 [2024-07-25 05:54:13.445083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.445109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.445229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.445261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.445409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.445435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.445564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.445590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.445710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.445736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 
00:34:19.934 [2024-07-25 05:54:13.445865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.445893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.446066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.446093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.446228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.934 [2024-07-25 05:54:13.446259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.934 qpair failed and we were unable to recover it. 00:34:19.934 [2024-07-25 05:54:13.446440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.446470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.446595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.446623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 
00:34:19.935 [2024-07-25 05:54:13.446773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.446799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.446912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.446938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.447082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.447109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.447235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.447272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.447427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.447453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 
00:34:19.935 [2024-07-25 05:54:13.447583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.447609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.447753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.447779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.447930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.447956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.448083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.448109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.448224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.448256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 
00:34:19.935 [2024-07-25 05:54:13.448406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.448433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.448580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.448606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.448755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.448781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.448928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.448955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.449100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.449127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 
00:34:19.935 [2024-07-25 05:54:13.449275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.449302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.449451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.449479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.449608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.449634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.449791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.449818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.449947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.449974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 
00:34:19.935 [2024-07-25 05:54:13.450130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.450156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.450284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.450311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.450485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.450512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.450628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.450655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.450793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.450820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 
00:34:19.935 [2024-07-25 05:54:13.450967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.450993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.451117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.451143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.451255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.451282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.451435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.451463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.451584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.451611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 
00:34:19.935 [2024-07-25 05:54:13.451731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.451758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.451935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.451962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.452108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.452134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.935 qpair failed and we were unable to recover it. 00:34:19.935 [2024-07-25 05:54:13.452292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.935 [2024-07-25 05:54:13.452329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.452503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.452530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 
00:34:19.936 [2024-07-25 05:54:13.452677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.452704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.452884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.452910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.453086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.453113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.453286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.453317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.453458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.453484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 
00:34:19.936 [2024-07-25 05:54:13.453628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.453654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.453828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.453855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.454006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.454032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.454204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.454230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.454378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.454405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 
00:34:19.936 [2024-07-25 05:54:13.454568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.454594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.454728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.454754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.454906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.454933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.455053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.455079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.455247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.455274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 
00:34:19.936 [2024-07-25 05:54:13.455425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.455452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.455611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.455637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.455765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.455792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.455946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.455972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.456122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.456148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 
00:34:19.936 [2024-07-25 05:54:13.456312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.456339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.456514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.456541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.456662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.456689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.456834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.456861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 00:34:19.936 [2024-07-25 05:54:13.456976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.936 [2024-07-25 05:54:13.457003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.936 qpair failed and we were unable to recover it. 
00:34:19.939 [... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." messages for tqpair=0x7fdb40000b90 (addr=10.0.0.2, port=4420) repeat through 2024-07-25 05:54:13.477014 ...]
00:34:19.939 [2024-07-25 05:54:13.477161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.477187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 00:34:19.939 [2024-07-25 05:54:13.477341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.477368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 00:34:19.939 [2024-07-25 05:54:13.477524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.477550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 00:34:19.939 [2024-07-25 05:54:13.477709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.477737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 00:34:19.939 [2024-07-25 05:54:13.477854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.477881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 
00:34:19.939 [2024-07-25 05:54:13.478058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.478084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 00:34:19.939 [2024-07-25 05:54:13.478249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.478276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 00:34:19.939 [2024-07-25 05:54:13.478445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.478471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 00:34:19.939 [2024-07-25 05:54:13.478645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.478672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 00:34:19.939 [2024-07-25 05:54:13.478792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.478819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 
00:34:19.939 [2024-07-25 05:54:13.478970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.478998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.939 qpair failed and we were unable to recover it. 00:34:19.939 [2024-07-25 05:54:13.479150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.939 [2024-07-25 05:54:13.479179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.479302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.479329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.479456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.479483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.479644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.479671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 
00:34:19.940 [2024-07-25 05:54:13.479830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.479863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.480010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.480036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.480225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.480260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.480394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.480421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.480539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.480567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 
00:34:19.940 [2024-07-25 05:54:13.480700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.480727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.480857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.480885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.481007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.481035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.481168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.481196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.481332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.481360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 
00:34:19.940 [2024-07-25 05:54:13.481501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.481529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.481682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.481709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.481880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.481907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.482038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.482064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.482231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.482266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 
00:34:19.940 [2024-07-25 05:54:13.482392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.482426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.482558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.482585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.482743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.482770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.482898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.482924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.483042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.483070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 
00:34:19.940 [2024-07-25 05:54:13.483218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.483251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.483393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.483420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.483549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.483577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.483711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.483745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.483902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.483934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 
00:34:19.940 [2024-07-25 05:54:13.484071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.484099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.484232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.484280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.484435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.484463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.484612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.484639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.484796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.484828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 
00:34:19.940 [2024-07-25 05:54:13.485015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.485042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.940 [2024-07-25 05:54:13.485193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.940 [2024-07-25 05:54:13.485221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.940 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.485385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.485417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.485572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.485600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.485733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.485766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 
00:34:19.941 [2024-07-25 05:54:13.485924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.485951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.486095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.486122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.486270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.486298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.486454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.486482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.486631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.486658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 
00:34:19.941 [2024-07-25 05:54:13.486818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.486848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.487010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.487038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.487160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.487188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.487319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.487349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.487488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.487516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 
00:34:19.941 [2024-07-25 05:54:13.487649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.487680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.487832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.487860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.487994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.488021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.488184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.488210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.488346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.488377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 
00:34:19.941 [2024-07-25 05:54:13.488535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.488562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.488704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.488731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.488897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.488924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.489057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.489084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.489213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.489249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 
00:34:19.941 [2024-07-25 05:54:13.489390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.489418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.489572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.489607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.489734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.489767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.489925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.489953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 00:34:19.941 [2024-07-25 05:54:13.490067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.490094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 
00:34:19.941 [2024-07-25 05:54:13.490252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.941 [2024-07-25 05:54:13.490281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.941 qpair failed and we were unable to recover it. 
00:34:19.944 [2024-07-25 05:54:13.510445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.510472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.510596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.510625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.510753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.510780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.510901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.510928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.511082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.511110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 
00:34:19.944 [2024-07-25 05:54:13.511251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.511278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.511405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.511432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.511558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.511586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.511764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.511791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.511912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.511939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 
00:34:19.944 [2024-07-25 05:54:13.512098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.512126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.512273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.512301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.512440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.512467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.512627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.512654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 00:34:19.944 [2024-07-25 05:54:13.512827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.944 [2024-07-25 05:54:13.512854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.944 qpair failed and we were unable to recover it. 
00:34:19.944 [2024-07-25 05:54:13.513008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.513034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.513185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.513211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.513362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.513389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.513516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.513543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.513676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.513704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 
00:34:19.945 [2024-07-25 05:54:13.513819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.513846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.513994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.514020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.514171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.514197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.514327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.514355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.514483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.514510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 
00:34:19.945 [2024-07-25 05:54:13.514661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.514692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.514852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.514878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.515054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.515080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.515235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.515267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.515399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.515425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 
00:34:19.945 [2024-07-25 05:54:13.515545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.515572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.515721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.515749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.515879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.515905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.516053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.516079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.516204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.516230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 
00:34:19.945 [2024-07-25 05:54:13.516360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.516386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.516566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.516593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.516719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.516745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.516910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.516937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.517118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.517145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 
00:34:19.945 [2024-07-25 05:54:13.517271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.517298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.517474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.517500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.517622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.517649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.517770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.517797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.517951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.517977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 
00:34:19.945 [2024-07-25 05:54:13.518151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.518178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.518337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.518363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.518481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.518508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.518631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.518658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.518836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.518863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 
00:34:19.945 [2024-07-25 05:54:13.518977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.519003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.519122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.519148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.519298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.519325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.519475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.519502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.519621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.519647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 
00:34:19.945 [2024-07-25 05:54:13.519780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.519806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.519959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.519986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.520114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.945 [2024-07-25 05:54:13.520140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.945 qpair failed and we were unable to recover it. 00:34:19.945 [2024-07-25 05:54:13.520297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.520324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.520445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.520470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 
00:34:19.946 [2024-07-25 05:54:13.520632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.520658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.520782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.520808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.520926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.520952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.521099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.521125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.521270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.521297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 
00:34:19.946 [2024-07-25 05:54:13.521455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.521487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.521642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.521668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.521795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.521823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.521946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.521973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.522121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.522147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 
00:34:19.946 [2024-07-25 05:54:13.522324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.522351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.522504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.522530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.522683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.522710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.522875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.522901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 00:34:19.946 [2024-07-25 05:54:13.523053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.523080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 
00:34:19.946 [2024-07-25 05:54:13.523204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.946 [2024-07-25 05:54:13.523232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.946 qpair failed and we were unable to recover it. 
00:34:19.946 [the preceding connect() failure for tqpair=0x7fdb40000b90 (addr=10.0.0.2, port=4420, errno=111) repeats through 05:54:13.539; further identical entries elided]
00:34:19.948 [2024-07-25 05:54:13.539571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.948 [2024-07-25 05:54:13.539612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.948 qpair failed and we were unable to recover it. 
00:34:19.949 [the same connect() failure repeats for tqpair=0x7fdb30000b90 (addr=10.0.0.2, port=4420, errno=111) through 05:54:13.542; further identical entries elided]
00:34:19.949 [2024-07-25 05:54:13.542879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.542906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.543035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.543062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.543188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.543215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.543386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.543413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.543535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.543563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 
00:34:19.949 [2024-07-25 05:54:13.543693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.543721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.543868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.543894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.544049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.544076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.544200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.544227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.544374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.544403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 
00:34:19.949 [2024-07-25 05:54:13.544588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.544614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.544771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.544798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.544920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.544947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.545094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.545121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.545247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.545275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 
00:34:19.949 [2024-07-25 05:54:13.545427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.545453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.545631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.545658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.545784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.545810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.545935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.545961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.546110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.546136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 
00:34:19.949 [2024-07-25 05:54:13.546274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.546301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.546420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.546447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.546623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.546664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.546798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.546828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.546984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.547011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 
00:34:19.949 [2024-07-25 05:54:13.547137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.547164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.547296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.547324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.547472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.547499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.547661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.547687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.547855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.547882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 
00:34:19.949 [2024-07-25 05:54:13.547998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.548024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.548147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.548173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.548290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.548318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.548468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.548495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.548616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.548643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 
00:34:19.949 [2024-07-25 05:54:13.548762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.548788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.548924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.548951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.549093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.549132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.949 qpair failed and we were unable to recover it. 00:34:19.949 [2024-07-25 05:54:13.549299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.949 [2024-07-25 05:54:13.549328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.549473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.549500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.549622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.549649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.549776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.549803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.549929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.549955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.550110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.550139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.550297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.550324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.550448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.550475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.550594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.550621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.550739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.550766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.550901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.550929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.551053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.551087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.551216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.551249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.551405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.551431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.551547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.551574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.551726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.551752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.551898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.551924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.552047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.552075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.552203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.552229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.552360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.552387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.552510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.552539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.552658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.552685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.552810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.552837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.552985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.553011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.553159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.553186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.553311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.553338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.553456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.553483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.553630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.553657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.553829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.553856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.553977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.554004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.554119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.554146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.554275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.554303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.554423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.554451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.554577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.554603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.554764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.554791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.554922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.554949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.555085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.555112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.555298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.555340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.555481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.555515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.555695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.555723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.555851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.555880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.556011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.556039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.556170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.556199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.556354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.556382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.556526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.556565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.556726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.556754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.556880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.556908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.557063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.557090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.557221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.557254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.557381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.557408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.557558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.557585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.557753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.557781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.557915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.557943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.558066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.558094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.558224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.558256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.558411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.558438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.558571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.558598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 
00:34:19.950 [2024-07-25 05:54:13.558724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.558752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.558864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.950 [2024-07-25 05:54:13.558891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.950 qpair failed and we were unable to recover it. 00:34:19.950 [2024-07-25 05:54:13.559020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.559047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.559171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.559198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.559351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.559378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.559556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.559582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.559752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.559778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.559953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.559980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.560126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.560153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.560299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.560327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.560480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.560507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.560654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.560680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.560794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.560821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.560950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.560976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.561131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.561157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.561305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.561332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.561492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.561519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.561669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.561695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.561896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.561923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.562068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.562095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.562252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.562279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.562427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.562458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.562639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.562666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.562816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.562842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.562998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.563024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.563178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.563205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.563362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.563389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.563551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.563578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.563703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.563730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.563852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.563878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.564028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.564054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.564198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.564225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.564347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.564374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.564502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.564529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.564683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.564709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.564868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.564894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.565018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.565044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.565164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.565191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.565359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.565387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.565540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.565566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.565696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.565723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.565852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.565880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.566036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.566062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.566213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.566240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.566399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.566428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.566564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.566591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.566739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.566765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.566893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.566919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.567044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.567072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.567219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.567249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.567431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.567457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.567611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.567638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.567816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.567843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.568018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.568044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.568195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.568222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 
00:34:19.951 [2024-07-25 05:54:13.568384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.568410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.568563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.568589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.568705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.568732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.568914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.568940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.951 qpair failed and we were unable to recover it. 00:34:19.951 [2024-07-25 05:54:13.569063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.951 [2024-07-25 05:54:13.569090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.569249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.569276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.569423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.569453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.569612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.569638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.569767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.569793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.569943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.569969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.570113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.570139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.570292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.570319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.570494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.570521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.570697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.570723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.570850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.570877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.571051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.571077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.571197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.571224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.571350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.571376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.571495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.571521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.571701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.571726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.571883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.571909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.572035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.572061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.572233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.572264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.572413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.572439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.572585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.572611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.572765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.572792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.572932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.572959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.573085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.573113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.573283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.573310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.573490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.573516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.573633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.573660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.573845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.573872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.574025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.574051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.574214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.574240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.574368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.574396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.574549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.574576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.574742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.574768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.574920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.574947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.575096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.575122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.575256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.575283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.575432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.575458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.575607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.575633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.575757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.575783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.575964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.575990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.576120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.576146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.576309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.576350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.576490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.576536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.576689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.576717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.576871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.576896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.577056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.577081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.577198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.577224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.577383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.577408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.577556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.577580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.577729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.577753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.577891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.577917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 
00:34:19.952 [2024-07-25 05:54:13.578094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.952 [2024-07-25 05:54:13.578119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.952 qpair failed and we were unable to recover it. 00:34:19.952 [2024-07-25 05:54:13.578231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.578261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.578413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.578438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.578569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.578607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.578791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.578816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 
00:34:19.953 [2024-07-25 05:54:13.579054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.579081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.579259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.579286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.579416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.579442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.579595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.579620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.579768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.579794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 
00:34:19.953 [2024-07-25 05:54:13.579944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.579998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.580126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.580152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.580293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.580319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.580446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.580473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.580628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.580653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 
00:34:19.953 [2024-07-25 05:54:13.580911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.580963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.581116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.581142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.581265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.581291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.581420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.581450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.581577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.581604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 
00:34:19.953 [2024-07-25 05:54:13.581753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.581778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.581930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.581958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.582107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.582133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.582277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.582305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.582476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.582503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 
00:34:19.953 [2024-07-25 05:54:13.582649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.953 [2024-07-25 05:54:13.582676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:19.953 qpair failed and we were unable to recover it. 00:34:19.953 [2024-07-25 05:54:13.582804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.582831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.582957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.582984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.583111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.583137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.583253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.583280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 
00:34:20.238 [2024-07-25 05:54:13.583402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.583429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.583549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.583577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.583696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.583723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.583875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.583902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.584025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.584052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 
00:34:20.238 [2024-07-25 05:54:13.584203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.584230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.584388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.584415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.584536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.584563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.584682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.584709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.584848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.584875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 
00:34:20.238 [2024-07-25 05:54:13.585002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.585029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.585176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.585202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 05:54:13.585331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 05:54:13.585357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.585479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.585507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.585650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.585677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 
00:34:20.239 [2024-07-25 05:54:13.585804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.585830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.585955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.585982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.586094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.586121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.586262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.586289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.586429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.586456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 
00:34:20.239 [2024-07-25 05:54:13.586577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.586603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.586736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.586763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.586885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.586913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.587046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.587073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.587194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.587222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 
00:34:20.239 [2024-07-25 05:54:13.587357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.587384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.587522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.587559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.587719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.587748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.587875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.587908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.588078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.588105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 
00:34:20.239 [2024-07-25 05:54:13.588246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.588274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.588414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.588441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.588603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.588629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.588791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.588817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 00:34:20.239 [2024-07-25 05:54:13.588948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.239 [2024-07-25 05:54:13.588975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.239 qpair failed and we were unable to recover it. 
00:34:20.242 [2024-07-25 05:54:13.607577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.242 [2024-07-25 05:54:13.607603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.242 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.607754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.607781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.607958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.607984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.608137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.608163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.608295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.608322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 
00:34:20.243 [2024-07-25 05:54:13.608448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.608480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.608631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.608658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.608832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.608858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.609002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.609029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.609182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.609209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 
00:34:20.243 [2024-07-25 05:54:13.609387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.609413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.609566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.609593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.609727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.609753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.609873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.609900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.610053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.610079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 
00:34:20.243 [2024-07-25 05:54:13.610211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.610237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.610416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.610442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.610560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.610586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.610748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.610774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.610902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.610930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 
00:34:20.243 [2024-07-25 05:54:13.611093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.611119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.611275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.611301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.611461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.611487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.611641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.243 [2024-07-25 05:54:13.611668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.243 qpair failed and we were unable to recover it. 00:34:20.243 [2024-07-25 05:54:13.611828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.611855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 
00:34:20.244 [2024-07-25 05:54:13.611969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.611995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.612146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.612172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.612300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.612327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.612453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.612480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.612633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.612659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 
00:34:20.244 [2024-07-25 05:54:13.612781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.612807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.612926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.612954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.613082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.613109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.613226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.613257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.613413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.613440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 
00:34:20.244 [2024-07-25 05:54:13.613562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.613588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.613743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.613771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.613947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.613974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.614127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.614155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.614339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.614366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 
00:34:20.244 [2024-07-25 05:54:13.614544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.614571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.614719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.614745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.614898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.614925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.615080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.615107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.615260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.615287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 
00:34:20.244 [2024-07-25 05:54:13.615434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.615461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.615615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.615647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.615795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.615821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.615999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.616025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.616156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.616182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 
00:34:20.244 [2024-07-25 05:54:13.616330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.616356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.616510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.616536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.616687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.616713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.616889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.616915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.617031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.617058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 
00:34:20.244 [2024-07-25 05:54:13.617234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.617265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.617419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.617446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.617608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.617635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.617811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.617837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 05:54:13.617987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 05:54:13.618013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 
00:34:20.245 [2024-07-25 05:54:13.618168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.618194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.618356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.618383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.618559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.618586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.618735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.618761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.618890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.618917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 
00:34:20.245 [2024-07-25 05:54:13.619066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.619093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.619235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.619266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.619387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.619414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.619550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.619576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.619704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.619732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 
00:34:20.245 [2024-07-25 05:54:13.619892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.619918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.620062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.620089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.620235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.620268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.620394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.620425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 05:54:13.620603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.620630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 
00:34:20.245 [2024-07-25 05:54:13.620778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 05:54:13.620804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 
00:34:20.248 [... identical connect() failure (errno = 111, ECONNREFUSED) and unrecoverable qpair error for tqpair=0x5ef600 (addr=10.0.0.2, port=4420) repeated verbatim through 2024-07-25 05:54:13.640798 ...]
00:34:20.248 [2024-07-25 05:54:13.640951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 05:54:13.640978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 05:54:13.641151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 05:54:13.641178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 05:54:13.641328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 05:54:13.641355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 05:54:13.641507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 05:54:13.641534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 05:54:13.641708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 05:54:13.641735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 
00:34:20.249 [2024-07-25 05:54:13.641885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.641911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.642076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.642102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.642290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.642317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.642442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.642469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.642587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.642613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 
00:34:20.249 [2024-07-25 05:54:13.642767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.642793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.642944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.642970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.643130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.643156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.643311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.643338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.643498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.643524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 
00:34:20.249 [2024-07-25 05:54:13.643680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.643706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.643827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.643855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.644017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.644044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.644166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.644193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.644342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.644369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 
00:34:20.249 [2024-07-25 05:54:13.644531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.644557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.644706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.644733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.644883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.644909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.645056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.645083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.645199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.645225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 
00:34:20.249 [2024-07-25 05:54:13.645373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.645400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.645543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.645569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.645713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.645739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.645926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.645952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.646098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.646125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 
00:34:20.249 [2024-07-25 05:54:13.646278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.646305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 05:54:13.646459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 05:54:13.646486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.646639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.646667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.646804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.646831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.646960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.646988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 
00:34:20.250 [2024-07-25 05:54:13.647162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.647189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.647308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.647336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.647521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.647548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.647701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.647728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.647909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.647936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 
00:34:20.250 [2024-07-25 05:54:13.648089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.648115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.648266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.648293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.648413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.648440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.648585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.648612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.648729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.648756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 
00:34:20.250 [2024-07-25 05:54:13.648875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.648901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.649063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.649089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.649251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.649278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 05:54:13.649432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 05:54:13.649459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.649611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.649638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 
00:34:20.251 [2024-07-25 05:54:13.649788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.649814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.649935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.649961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.650134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.650161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.650314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.650341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.650492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.650519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 
00:34:20.251 [2024-07-25 05:54:13.650652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.650678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.650805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.650833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.650989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.651015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.651137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.651165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.651323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.651352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 
00:34:20.251 [2024-07-25 05:54:13.651506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.651538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.651717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.651744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.651890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.651917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.652066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.652093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.652213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.652239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 
00:34:20.251 [2024-07-25 05:54:13.652367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.652394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.652567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.652594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 05:54:13.652727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 05:54:13.652753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 05:54:13.652907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.652933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 05:54:13.653108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.653134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 
00:34:20.252 [2024-07-25 05:54:13.653287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.653315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 05:54:13.653432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.653459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 05:54:13.653612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.653638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 05:54:13.653751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.653778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 05:54:13.653937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.653963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 
00:34:20.252 [2024-07-25 05:54:13.654092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.654119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 05:54:13.654270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.654297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 05:54:13.654417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.654444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 05:54:13.654573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.654600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 05:54:13.654753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 05:54:13.654780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 
00:34:20.257 [2024-07-25 05:54:13.674111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 05:54:13.674137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.674290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.674317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.674439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.674467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.674642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.674668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.674850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.674876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 
00:34:20.258 [2024-07-25 05:54:13.675056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.675083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.675206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.675236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.675394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.675421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.675540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.675568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.675749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.675776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 
00:34:20.258 [2024-07-25 05:54:13.675943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.675969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.676151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.676178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.676324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.676351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.676471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.676498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.676620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.676647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 
00:34:20.258 [2024-07-25 05:54:13.676789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.676815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.676970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.676997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.677141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.677168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.677346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.677373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.677505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.677532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 
00:34:20.258 [2024-07-25 05:54:13.677691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.677717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.677889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.677916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.678095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.678122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.678270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.678297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 05:54:13.678483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.678509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 
00:34:20.258 [2024-07-25 05:54:13.678661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 05:54:13.678687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.678847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.678874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.679050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.679077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.679223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.679266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.679422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.679448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 
00:34:20.259 [2024-07-25 05:54:13.679570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.679596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.679741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.679768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.679914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.679939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.680063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.680089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.680238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.680269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 
00:34:20.259 [2024-07-25 05:54:13.680443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.680470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.680581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.680607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.680758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.680785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.680932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.680958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.681109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.681136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 
00:34:20.259 [2024-07-25 05:54:13.681284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.681311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.681459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.681485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.681632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.681658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.681803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.681829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.681979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.682005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 
00:34:20.259 [2024-07-25 05:54:13.682128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.682154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.682297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.682324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.682452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.682482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.682607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.682634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 05:54:13.682758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 05:54:13.682786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 
00:34:20.260 [2024-07-25 05:54:13.682964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.682991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.683116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.683143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.683312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.683339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.683510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.683537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.683680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.683706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 
00:34:20.260 [2024-07-25 05:54:13.683868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.683894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.684033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.684060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.684176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.684203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.684327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.684355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.684533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.684560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 
00:34:20.260 [2024-07-25 05:54:13.684713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.684740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.684898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.684924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.685081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.685107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.685252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.685279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.685409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.685435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 
00:34:20.260 [2024-07-25 05:54:13.685561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.685587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.685736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.685762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.685910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.685936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.686082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.686108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.686285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.686312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 
00:34:20.260 [2024-07-25 05:54:13.686457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.686484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 05:54:13.686633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 05:54:13.686660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 05:54:13.686812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 05:54:13.686838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 05:54:13.686983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 05:54:13.687009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 05:54:13.687163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 05:54:13.687196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 
00:34:20.261 [2024-07-25 05:54:13.687333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 05:54:13.687361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 05:54:13.687534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 05:54:13.687560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 05:54:13.687714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 05:54:13.687741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 05:54:13.687893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 05:54:13.687919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 05:54:13.688096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 05:54:13.688122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 
00:34:20.265 [2024-07-25 05:54:13.707226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.265 [2024-07-25 05:54:13.707259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.265 qpair failed and we were unable to recover it. 00:34:20.265 [2024-07-25 05:54:13.707374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.707400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.707516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.707542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.707720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.707746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.707891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.707917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 
00:34:20.266 [2024-07-25 05:54:13.708070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.708096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.708255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.708282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.708438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.708464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.708585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.708611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.708767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.708793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 
00:34:20.266 [2024-07-25 05:54:13.708970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.708998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.709174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.709200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.709332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.709358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.709481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.709507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.709654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.709680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 
00:34:20.266 [2024-07-25 05:54:13.709827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.709853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.709975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.710003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.710129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.710156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.710285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.710312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.710462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.710489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 
00:34:20.266 [2024-07-25 05:54:13.710664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.710690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.710824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.710850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.710997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.711023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 05:54:13.711172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 05:54:13.711198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.711379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.711406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 
00:34:20.267 [2024-07-25 05:54:13.711529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.711555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.711673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.711701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.711846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.711872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.712012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.712038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.712188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.712214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 
00:34:20.267 [2024-07-25 05:54:13.712374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.712401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.712533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.712560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.712684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.712710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.712858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.712884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.713038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.713065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 
00:34:20.267 [2024-07-25 05:54:13.713192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.713219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.713346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.713373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.713526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.713552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.713700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.713726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.713875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.713902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 
00:34:20.267 [2024-07-25 05:54:13.714049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.714075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.714192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.714218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.714351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.714378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.714531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.714557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 05:54:13.714667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.714693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 
00:34:20.267 [2024-07-25 05:54:13.714845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 05:54:13.714873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.715026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.715052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.715195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.715221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.715396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.715422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.715546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.715573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 
00:34:20.268 [2024-07-25 05:54:13.715751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.715777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.715905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.715932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.716081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.716108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.716266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.716293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.716412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.716439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 
00:34:20.268 [2024-07-25 05:54:13.716588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.716614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.716762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.716789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.716938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.716964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.717143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.717169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.717326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.717353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 
00:34:20.268 [2024-07-25 05:54:13.717502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.717528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.717700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.717730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.717901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.717927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.718055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.718081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.718259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.718285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 
00:34:20.268 [2024-07-25 05:54:13.718435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.718462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.718617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.718644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.718791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.268 [2024-07-25 05:54:13.718817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.268 qpair failed and we were unable to recover it. 00:34:20.268 [2024-07-25 05:54:13.718950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.718977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 00:34:20.269 [2024-07-25 05:54:13.719132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.719158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 
00:34:20.269 [2024-07-25 05:54:13.719312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.719339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 00:34:20.269 [2024-07-25 05:54:13.719491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.719518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 00:34:20.269 [2024-07-25 05:54:13.719671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.719698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 00:34:20.269 [2024-07-25 05:54:13.719847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.719874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 00:34:20.269 [2024-07-25 05:54:13.720024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.720050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 
00:34:20.269 [2024-07-25 05:54:13.720198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.720225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 00:34:20.269 [2024-07-25 05:54:13.720378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.720404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 00:34:20.269 [2024-07-25 05:54:13.720530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.720556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 00:34:20.269 [2024-07-25 05:54:13.720672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.720698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 00:34:20.269 [2024-07-25 05:54:13.720843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.269 [2024-07-25 05:54:13.720870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.269 qpair failed and we were unable to recover it. 
00:34:20.274 [2024-07-25 05:54:13.739361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.274 [2024-07-25 05:54:13.739387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.274 qpair failed and we were unable to recover it. 00:34:20.274 [2024-07-25 05:54:13.739559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.274 [2024-07-25 05:54:13.739585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.274 qpair failed and we were unable to recover it. 00:34:20.274 [2024-07-25 05:54:13.739711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.274 [2024-07-25 05:54:13.739736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.274 qpair failed and we were unable to recover it. 00:34:20.274 [2024-07-25 05:54:13.739887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.274 [2024-07-25 05:54:13.739913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.274 qpair failed and we were unable to recover it. 00:34:20.274 [2024-07-25 05:54:13.740040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.274 [2024-07-25 05:54:13.740065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.274 qpair failed and we were unable to recover it. 
00:34:20.274 [2024-07-25 05:54:13.740185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.274 [2024-07-25 05:54:13.740211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.274 qpair failed and we were unable to recover it. 00:34:20.274 [2024-07-25 05:54:13.740350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.274 [2024-07-25 05:54:13.740376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.274 qpair failed and we were unable to recover it. 00:34:20.274 [2024-07-25 05:54:13.740505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.274 [2024-07-25 05:54:13.740534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.274 qpair failed and we were unable to recover it. 00:34:20.274 [2024-07-25 05:54:13.740694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.274 [2024-07-25 05:54:13.740719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.740886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.740910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 
00:34:20.275 [2024-07-25 05:54:13.741065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.741089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.741209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.741233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.741365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.741391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.741510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.741534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.741649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.741674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 
00:34:20.275 [2024-07-25 05:54:13.741825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.741850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.741974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.741999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.742117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.742142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.742269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.742295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.742416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.742442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 
00:34:20.275 [2024-07-25 05:54:13.742586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.742612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.742816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.742857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.743047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.743076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.743231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.743265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.743391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.743418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 
00:34:20.275 [2024-07-25 05:54:13.743570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.743597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.743720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.743747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.743870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.743897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.744023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.744050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.744169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.744196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 
00:34:20.275 [2024-07-25 05:54:13.744326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.744354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.744530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.744556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.744732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.744758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.275 qpair failed and we were unable to recover it. 00:34:20.275 [2024-07-25 05:54:13.744880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.275 [2024-07-25 05:54:13.744906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.745030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.745061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 
00:34:20.276 [2024-07-25 05:54:13.745222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.745254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.745462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.745488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.745615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.745642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.745767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.745793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.745918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.745944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 
00:34:20.276 [2024-07-25 05:54:13.746070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.746096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.746228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.746269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.746395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.746421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.746554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.746581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.746734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.746760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 
00:34:20.276 [2024-07-25 05:54:13.746900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.746927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.747043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.747069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.747217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.747250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.747396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.747437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.747578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.747609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 
00:34:20.276 [2024-07-25 05:54:13.747845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.747872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.748001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.748028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.748176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.748202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.748335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.748363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.748530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.748557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 
00:34:20.276 [2024-07-25 05:54:13.748732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.748759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.748902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.748930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.749083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.276 [2024-07-25 05:54:13.749110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.276 qpair failed and we were unable to recover it. 00:34:20.276 [2024-07-25 05:54:13.749255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.749284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.749418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.749446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 
00:34:20.277 [2024-07-25 05:54:13.749606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.749633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.749778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.749816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.749947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.749975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.750105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.750132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.750286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.750315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 
00:34:20.277 [2024-07-25 05:54:13.750469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.750497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.750646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.750673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.750796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.750823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.750966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.750992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.751118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.751144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 
00:34:20.277 [2024-07-25 05:54:13.751284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.751312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.751441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.751469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.751620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.751647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.751774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.751801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.751964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.751991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 
00:34:20.277 [2024-07-25 05:54:13.752150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.752177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.752340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.752368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.752514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.752541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.752699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.752725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 00:34:20.277 [2024-07-25 05:54:13.752890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.277 [2024-07-25 05:54:13.752918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.277 qpair failed and we were unable to recover it. 
00:34:20.282 [2024-07-25 05:54:13.767173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.282 [2024-07-25 05:54:13.767201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.282 qpair failed and we were unable to recover it.
00:34:20.282 [2024-07-25 05:54:13.767250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fd620 (9): Bad file descriptor
00:34:20.282 [2024-07-25 05:54:13.767451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.282 [2024-07-25 05:54:13.767493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.282 qpair failed and we were unable to recover it.
00:34:20.282 [2024-07-25 05:54:13.768135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.282 [2024-07-25 05:54:13.768167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.282 qpair failed and we were unable to recover it.
00:34:20.283 [2024-07-25 05:54:13.771782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.771810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 00:34:20.283 [2024-07-25 05:54:13.771941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.771970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 00:34:20.283 [2024-07-25 05:54:13.772123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.772150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 00:34:20.283 [2024-07-25 05:54:13.772300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.772327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 00:34:20.283 [2024-07-25 05:54:13.772483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.772515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 
00:34:20.283 [2024-07-25 05:54:13.772661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.772687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 00:34:20.283 [2024-07-25 05:54:13.772823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.772850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 00:34:20.283 [2024-07-25 05:54:13.772975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.773002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 00:34:20.283 [2024-07-25 05:54:13.773149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.773176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 00:34:20.283 [2024-07-25 05:54:13.773300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.773327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 
00:34:20.283 [2024-07-25 05:54:13.773473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.773501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.283 qpair failed and we were unable to recover it. 00:34:20.283 [2024-07-25 05:54:13.773629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.283 [2024-07-25 05:54:13.773656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.773779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.773806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.773933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.773960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.774127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.774154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 
00:34:20.284 [2024-07-25 05:54:13.774309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.774337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.774488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.774515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.774639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.774667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.774829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.774857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.774986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.775013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 
00:34:20.284 [2024-07-25 05:54:13.775137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.775164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.775313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.775341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.775466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.775494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.775618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.775645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.775795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.775822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 
00:34:20.284 [2024-07-25 05:54:13.775975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.776002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.776168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.776208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.776355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.776384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.776540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.776567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.776722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.776750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 
00:34:20.284 [2024-07-25 05:54:13.776895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.776922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.777050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.777080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.777208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.777235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.777439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.777467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 00:34:20.284 [2024-07-25 05:54:13.777592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.777620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.284 qpair failed and we were unable to recover it. 
00:34:20.284 [2024-07-25 05:54:13.777746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.284 [2024-07-25 05:54:13.777774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.777895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.777922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.778078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.778107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.778249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.778278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.778404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.778431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 
00:34:20.285 [2024-07-25 05:54:13.778550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.778576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.778691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.778718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.778859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.778887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.779017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.779044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.779194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.779227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 
00:34:20.285 [2024-07-25 05:54:13.779391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.779431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.779566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.779594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.779740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.779766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.779892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.779921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.780081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.780108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 
00:34:20.285 [2024-07-25 05:54:13.780263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.780290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.780423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.780451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.780601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.780627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.780750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.780777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.780905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.780932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 
00:34:20.285 [2024-07-25 05:54:13.781080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.781106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.781262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.781290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.781443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.781470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.781653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.781686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.781880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.781908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 
00:34:20.285 [2024-07-25 05:54:13.782058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.782085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.782219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.285 [2024-07-25 05:54:13.782254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.285 qpair failed and we were unable to recover it. 00:34:20.285 [2024-07-25 05:54:13.782416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.782457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.782590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.782620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.782781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.782810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 
00:34:20.286 [2024-07-25 05:54:13.782964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.782992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.783121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.783147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.783314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.783354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.783488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.783517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.783647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.783673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 
00:34:20.286 [2024-07-25 05:54:13.783841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.783869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.784019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.784057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.784190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.784217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.784382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.784410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.784539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.784566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 
00:34:20.286 [2024-07-25 05:54:13.784697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.784725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.784868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.784896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.785014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.785042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.785182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.785210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.785347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.785377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 
00:34:20.286 [2024-07-25 05:54:13.785520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.785559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.785715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.785743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.785869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.785897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.786042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.786069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.786215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.786248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 
00:34:20.286 [2024-07-25 05:54:13.786409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.786435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.786558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.786585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.786817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.286 [2024-07-25 05:54:13.786843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.286 qpair failed and we were unable to recover it. 00:34:20.286 [2024-07-25 05:54:13.787020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.787047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.787223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.787255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 
00:34:20.287 [2024-07-25 05:54:13.787380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.787406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.787531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.787558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.787708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.787735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.787857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.787882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.787998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.788024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 
00:34:20.287 [2024-07-25 05:54:13.788173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.788200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.788327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.788354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.788487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.788513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.788699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.788729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.788866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.788892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 
00:34:20.287 [2024-07-25 05:54:13.789011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.789037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.789157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.789183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.789332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.789359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.789475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.789501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.789645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.789671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 
00:34:20.287 [2024-07-25 05:54:13.789810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.789837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.789986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.790012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.790168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.790194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.790345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.790372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.790494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.790521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 
00:34:20.287 [2024-07-25 05:54:13.790654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.287 [2024-07-25 05:54:13.790680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.287 qpair failed and we were unable to recover it. 00:34:20.287 [2024-07-25 05:54:13.790803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.790829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.790967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.791007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.791205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.791233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.791390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.791420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 
00:34:20.288 [2024-07-25 05:54:13.791552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.791579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.791711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.791743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.791897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.791924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.792097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.792130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.792268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.792296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 
00:34:20.288 [2024-07-25 05:54:13.792444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.792471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.792632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.792659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.792811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.792838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.793000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.793027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.793164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.793190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 
00:34:20.288 [2024-07-25 05:54:13.793334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.793367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.793503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.793531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.793741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.793775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.793910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.793937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.794091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.794118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 
00:34:20.288 [2024-07-25 05:54:13.794273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.794300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.794446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.794474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.794610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.794638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.794772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.794802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.794942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.794970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 
00:34:20.288 [2024-07-25 05:54:13.795103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.795130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.795303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.795343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.795476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.795514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.795651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.795678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.795826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.795852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 
00:34:20.288 [2024-07-25 05:54:13.796007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.288 [2024-07-25 05:54:13.796036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.288 qpair failed and we were unable to recover it. 00:34:20.288 [2024-07-25 05:54:13.796166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.796193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.796340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.796367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.796501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.796528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.796663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.796689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 
00:34:20.289 [2024-07-25 05:54:13.796815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.796843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.796987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.797015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.797143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.797169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.797300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.797333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.797481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.797522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 
00:34:20.289 [2024-07-25 05:54:13.797685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.797713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.797838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.797865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.797987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.798019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.798142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.798169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.798348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.798375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 
00:34:20.289 [2024-07-25 05:54:13.798503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.798541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.798664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.798692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.798817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.798844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.798976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.799004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.799138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.799165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 
00:34:20.289 [2024-07-25 05:54:13.799314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.799341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.799493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.799520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.799649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.799676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.799805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.799831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.799961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.799989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 
00:34:20.289 [2024-07-25 05:54:13.800108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.800135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.800332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.800371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.800535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.800563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.800687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.800715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.800867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.800894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 
00:34:20.289 [2024-07-25 05:54:13.801012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.801039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.801169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.801196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.801350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.289 [2024-07-25 05:54:13.801377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.289 qpair failed and we were unable to recover it. 00:34:20.289 [2024-07-25 05:54:13.801532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-07-25 05:54:13.801560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 00:34:20.290 [2024-07-25 05:54:13.801717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.290 [2024-07-25 05:54:13.801745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.290 qpair failed and we were unable to recover it. 
00:34:20.290 [2024-07-25 05:54:13.804456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.290 [2024-07-25 05:54:13.804485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.290 qpair failed and we were unable to recover it.
00:34:20.290 [2024-07-25 05:54:13.804618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.290 [2024-07-25 05:54:13.804645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.290 qpair failed and we were unable to recover it.
00:34:20.290 [2024-07-25 05:54:13.804763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.290 [2024-07-25 05:54:13.804793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.290 qpair failed and we were unable to recover it.
00:34:20.290 [2024-07-25 05:54:13.804916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.290 [2024-07-25 05:54:13.804943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.290 qpair failed and we were unable to recover it.
00:34:20.290 [2024-07-25 05:54:13.805123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.290 [2024-07-25 05:54:13.805150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.290 qpair failed and we were unable to recover it.
00:34:20.290 [2024-07-25 05:54:13.806292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.290 [2024-07-25 05:54:13.806331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.290 qpair failed and we were unable to recover it.
00:34:20.290 [2024-07-25 05:54:13.806491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.290 [2024-07-25 05:54:13.806522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.290 qpair failed and we were unable to recover it.
00:34:20.290 [2024-07-25 05:54:13.806660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.290 [2024-07-25 05:54:13.806688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.290 qpair failed and we were unable to recover it.
00:34:20.290 [2024-07-25 05:54:13.806853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.290 [2024-07-25 05:54:13.806879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.290 qpair failed and we were unable to recover it.
00:34:20.290 [2024-07-25 05:54:13.807026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.290 [2024-07-25 05:54:13.807053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.290 qpair failed and we were unable to recover it.
00:34:20.294 [2024-07-25 05:54:13.821064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.821090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.821253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.821280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.821437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.821467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.821622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.821648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.821822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.821848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 
00:34:20.294 [2024-07-25 05:54:13.821983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.822009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.822157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.822183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.822310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.822336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.822458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.822484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.822635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.822661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 
00:34:20.294 [2024-07-25 05:54:13.822806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.822832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.822960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.822986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.823140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.823168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.294 [2024-07-25 05:54:13.823329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.294 [2024-07-25 05:54:13.823368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.294 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.823505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.823533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-07-25 05:54:13.823666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.823693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.823872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.823898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.824052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.824080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.824232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.824265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.824403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.824431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-07-25 05:54:13.824583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.824610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.824786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.824813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.824967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.824993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.825123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.825151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.825305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.825331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-07-25 05:54:13.825474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.825500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.825626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.825653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.825776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.825803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.825951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.825978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.826129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.826159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-07-25 05:54:13.826323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.826350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.826475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.826503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.826657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.826684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.826833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.826860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.827017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.827043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-07-25 05:54:13.827167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.827193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.827338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.827364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.827482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.827520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.827641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.827668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.827817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.827845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-07-25 05:54:13.827994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.828020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.828199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.828225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.828366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.828392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.828520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.828547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.828699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.828726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-07-25 05:54:13.828875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.828902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.829086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.829113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.829263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.829299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.829427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.829453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.829616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.829643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-07-25 05:54:13.829770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.829797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.829924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.829950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.830104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.830130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.830257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.830296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 00:34:20.295 [2024-07-25 05:54:13.830444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.295 [2024-07-25 05:54:13.830470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.295 qpair failed and we were unable to recover it. 
00:34:20.295 [2024-07-25 05:54:13.830628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.830654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.830834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.830861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.830986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.831013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.831162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.831189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.831354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.831380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 
00:34:20.296 [2024-07-25 05:54:13.831530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.831556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.831691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.831718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.831869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.831896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.832018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.832044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.832180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.832207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 
00:34:20.296 [2024-07-25 05:54:13.832368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.832394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.832521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.832547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.832668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.832699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.832824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.832852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.832994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.833021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 
00:34:20.296 [2024-07-25 05:54:13.833198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.833239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.833390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.833420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.833602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.833629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.833753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.833780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 00:34:20.296 [2024-07-25 05:54:13.833941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.833968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 
00:34:20.296 [2024-07-25 05:54:13.834122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.296 [2024-07-25 05:54:13.834150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.296 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." pair repeats ~110 more times between 05:54:13.834 and 05:54:13.854, for tqpair values 0x7fdb40000b90, 0x7fdb30000b90, and 0x5ef600, all targeting addr=10.0.0.2, port=4420 ...]
00:34:20.299 [2024-07-25 05:54:13.854665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.854697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.854861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.854888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.855009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.855038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.855163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.855192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.855374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.855403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 
00:34:20.299 [2024-07-25 05:54:13.855553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.855580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.855732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.855760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.855908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.855935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.856062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.856089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.856248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.856277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 
00:34:20.299 [2024-07-25 05:54:13.856407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.856435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.856591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.856618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.856772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.856800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.856955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.856982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.857140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.857168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 
00:34:20.299 [2024-07-25 05:54:13.857347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.857377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.857529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.857556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.857674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.857702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.857857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.857883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.858037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.858064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 
00:34:20.299 [2024-07-25 05:54:13.858221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.858253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.858421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.858449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.858594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.858621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.858775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.858802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.858955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.858982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 
00:34:20.299 [2024-07-25 05:54:13.859130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.859157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.859308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.859336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.859518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.859546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.859720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.859746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.859872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.859899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 
00:34:20.299 [2024-07-25 05:54:13.860030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.860057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.860212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.860240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.860411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.860440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.860570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.860599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 00:34:20.299 [2024-07-25 05:54:13.860754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.860781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.299 qpair failed and we were unable to recover it. 
00:34:20.299 [2024-07-25 05:54:13.860932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.299 [2024-07-25 05:54:13.860959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.861137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.861164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.861340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.861367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.861544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.861571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.861720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.861747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 
00:34:20.300 [2024-07-25 05:54:13.861894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.861925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.862053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.862080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.862280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.862321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.862506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.862535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.862664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.862693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 
00:34:20.300 [2024-07-25 05:54:13.862810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.862838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.862960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.862988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.863145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.863173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.863333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.863361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.863539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.863566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 
00:34:20.300 [2024-07-25 05:54:13.863719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.863748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.863872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.863899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.864053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.864080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.864229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.864262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.864425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.864452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 
00:34:20.300 [2024-07-25 05:54:13.864601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.864628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.864775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.864802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.864930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.864958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.865078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.865105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.865283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.865311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 
00:34:20.300 [2024-07-25 05:54:13.865495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.865522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.865644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.865671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.865821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.865849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.866025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.866052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.866199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.866226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 
00:34:20.300 [2024-07-25 05:54:13.866377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.866404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.866559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.866586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.866755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.866782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.866927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.866954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.867089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.867119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 
00:34:20.300 [2024-07-25 05:54:13.867283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.867311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.867442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.867470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.867624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.867651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.867801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.867829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 00:34:20.300 [2024-07-25 05:54:13.867976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.300 [2024-07-25 05:54:13.868003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.300 qpair failed and we were unable to recover it. 
00:34:20.300 [2024-07-25 05:54:13.868178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.300 [2024-07-25 05:54:13.868207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.300 qpair failed and we were unable to recover it.
00:34:20.300 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7fdb30000b90 repeat from 05:54:13.868410 through 05:54:13.869107; each qpair failed and could not be recovered ...]
00:34:20.300 [2024-07-25 05:54:13.869230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.300 [2024-07-25 05:54:13.869264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.300 qpair failed and we were unable to recover it.
00:34:20.304 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7fdb40000b90, addr=10.0.0.2, port=4420 repeat from 05:54:13.869415 through 05:54:13.888318; each qpair failed and could not be recovered ...]
00:34:20.304 [2024-07-25 05:54:13.888473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.888500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.888649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.888676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.888815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.888842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.888990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.889017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.889137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.889164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-07-25 05:54:13.889308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.889336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.889512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.889539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.889668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.889694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.889827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.889853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.890028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.890055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-07-25 05:54:13.890204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.890236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.890370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.890397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.890575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.890602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.890749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.890776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.890954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.890981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-07-25 05:54:13.891127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.891153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.891274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.891301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.891425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.891452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.891576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.891603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.891777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.891804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-07-25 05:54:13.891951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.891978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.892101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.892128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.892305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.892332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.892499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.892526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.892681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.892708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-07-25 05:54:13.892888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.892914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.893068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.893095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.893265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.893293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.893468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.893495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.893673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.893700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-07-25 05:54:13.893845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.893871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.894022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.894050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.894211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.894238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.894396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.894423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.894611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.894638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-07-25 05:54:13.894792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.894821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.894974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.895001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.895129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.895158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.895314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.895343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.895495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.895522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-07-25 05:54:13.895677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.895703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.895824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.895851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.896001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.896028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.896173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.896200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.896384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.896411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-07-25 05:54:13.896558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.896586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.896730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.896757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.896903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.896930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.897050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.897077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.897210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.897237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 
00:34:20.304 [2024-07-25 05:54:13.897360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.897391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.897538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.897565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.304 qpair failed and we were unable to recover it. 00:34:20.304 [2024-07-25 05:54:13.897712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.304 [2024-07-25 05:54:13.897739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.897864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.897891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.898016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.898043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-07-25 05:54:13.898170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.898196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.898371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.898399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.898554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.898580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.898756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.898784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.898939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.898965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-07-25 05:54:13.899089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.899116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.899248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.899274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.899411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.899437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.899607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.899634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.899770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.899798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-07-25 05:54:13.899979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.900006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.900156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.900183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.900319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.900346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.900499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.900526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.900682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.900709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-07-25 05:54:13.900889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.900916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.901094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.901120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.901270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.901298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.901420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.901448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.901574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.901602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 
00:34:20.305 [2024-07-25 05:54:13.901718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.901745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.901916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.901943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.902125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.902153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.902329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.902356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 00:34:20.305 [2024-07-25 05:54:13.902504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.305 [2024-07-25 05:54:13.902532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.305 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeats from 05:54:13.902716 through 05:54:13.921676, nearly all for tqpair=0x7fdb40000b90 (briefly tqpair=0x7fdb30000b90 between 05:54:13.914234 and 05:54:13.915206) with addr=10.0.0.2, port=4420; console timestamps advance from 00:34:20.305 to 00:34:20.589 ...]
00:34:20.589 [2024-07-25 05:54:13.921835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.921863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.922038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.922065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.922238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.922270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.922397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.922424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.922535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.922563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 
00:34:20.590 [2024-07-25 05:54:13.922744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.922770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.922947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.922973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.923156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.923183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.923342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.923369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.923527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.923554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 
00:34:20.590 [2024-07-25 05:54:13.923699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.923725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.923879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.923905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.924026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.924053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.924203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.924230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.924387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.924414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 
00:34:20.590 [2024-07-25 05:54:13.924540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.924567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.924741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.924767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.924884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.924911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.925032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.925058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.925208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.925235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 
00:34:20.590 [2024-07-25 05:54:13.925366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.925400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.925566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.925593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.925726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.925752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.925897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.925924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.926046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.926073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 
00:34:20.590 [2024-07-25 05:54:13.926231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.926273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.926421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.926448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.926560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.926586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.926762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.926788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.926936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.926963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 
00:34:20.590 [2024-07-25 05:54:13.927109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.927136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.927270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.927298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.927451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.927478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.927596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.927622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.927757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.927783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 
00:34:20.590 [2024-07-25 05:54:13.927947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.927973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.928119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.928146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 05:54:13.928296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 05:54:13.928323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.928454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.928482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.928633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.928659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 05:54:13.928813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.928840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.928992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.929019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.929145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.929171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.929370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.929398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.929551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.929578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 05:54:13.929734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.929760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.929937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.929963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.930088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.930115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.930271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.930299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.930451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.930476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 05:54:13.930628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.930654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.930807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.930834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.931007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.931033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.931157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.931185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.931341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.931369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 05:54:13.931497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.931523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.931647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.931674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.931799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.931825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.931972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.931998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.932145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.932172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 05:54:13.932331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.932359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.932490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.932516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.932669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.932695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.932840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.932867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.933016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.933042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 05:54:13.933185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.933211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.933373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.933400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.933553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.933579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.933728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.933755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.933887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.933914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 05:54:13.934089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.934115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.934291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.934318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.934500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.934527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.934676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.934703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 05:54:13.934884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.934910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 05:54:13.935088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 05:54:13.935113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 05:54:13.935262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 05:54:13.935289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 05:54:13.935433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 05:54:13.935461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 05:54:13.935640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 05:54:13.935667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 05:54:13.935833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 05:54:13.935859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 
00:34:20.592 [2024-07-25 05:54:13.936012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 05:54:13.936039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 05:54:13.936191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 05:54:13.936217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 05:54:13.936375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 05:54:13.936401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 05:54:13.936540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 05:54:13.936566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 05:54:13.936704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 05:54:13.936731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 
00:34:20.592 [2024-07-25 05:54:13.936908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.936933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.937103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.937130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.937275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.937307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.937439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.937467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.937620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.937647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.937831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.937858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.938037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.938064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.938189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.938215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.938397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.938424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.938542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.938569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.938713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.938738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.938851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.938878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.939026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.939052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.939200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.939226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.939387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.939414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.939541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.939568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.939724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.939751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.939909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.939936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.940057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.940084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.940261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.940288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.940414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.940439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.940589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.940616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.940792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.940818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.940979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.941005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.941186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.941212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.941369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.941396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.941544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.941571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.941710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.592 [2024-07-25 05:54:13.941738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.592 qpair failed and we were unable to recover it.
00:34:20.592 [2024-07-25 05:54:13.941891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.941917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.942078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.942104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.942274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.942315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.942477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.942506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.942656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.942683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.942810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.942837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.942960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.942987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.943150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.943177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.943328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.943356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.943512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.943539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.943685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.943712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.943857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.943884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.944068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.944095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.944252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.944280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.944410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.944452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.944606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.944634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.944784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.944811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.944937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.944966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.945147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.945174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.945353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.945381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.945535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.945563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.945717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.945746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.945899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.945926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.946079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.946108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.946293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.946321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.946472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.946498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.946650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.946677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.946830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.946857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.947010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.947037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.947156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.947183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.593 qpair failed and we were unable to recover it.
00:34:20.593 [2024-07-25 05:54:13.947334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.593 [2024-07-25 05:54:13.947362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.947510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.947536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.947684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.947712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.947890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.947917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.948041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.948068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.948226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.948257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.948384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.948411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.948591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.948618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.948776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.948803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.948929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.948958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.949137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.949164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.949301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.949329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.949452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.949479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.949659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.949687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.949834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.949861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.950013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.950042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.950164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.950191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.950323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.950351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.950498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.950525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.950647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.950674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.950822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.950849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.951007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.951033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.951184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.951211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.951350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.951377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.951525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.951556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.951676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.951704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.951877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.951905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.952037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.952064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.952211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.952239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.952392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.952419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.952544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.952571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.952748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.952775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.952936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.952963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.953117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.953144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.953278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.953306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.953460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.953486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.953635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.594 [2024-07-25 05:54:13.953662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.594 qpair failed and we were unable to recover it.
00:34:20.594 [2024-07-25 05:54:13.953840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 05:54:13.953867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 05:54:13.954033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.954060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.954235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.954278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.954408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.954435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.954568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.954596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 05:54:13.954749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.954776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.954905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.954932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.955081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.955108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.955252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.955280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.955437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.955464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 05:54:13.955641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.955669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.955845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.955871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.956023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.956050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.956209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.956236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.956405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.956432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 05:54:13.956586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.956613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.956765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.956791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.956940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.956967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.957116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.957143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.957292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.957320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 05:54:13.957434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.957461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.957615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.957643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.957795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.957823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.958013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.958041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.958208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.958236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 05:54:13.958402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.958429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.958583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.958610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.958763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.958795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.958931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.958959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.959102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.959129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 05:54:13.959254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.959283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.959437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.959463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.959641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.959668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.959822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.959848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.959997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.960025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 05:54:13.960150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.960177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.960322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.960350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.960496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.960523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.960676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 05:54:13.960703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 05:54:13.960829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.960856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 05:54:13.961006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.961033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.961212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.961239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.961398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.961425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.961578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.961605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.961755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.961781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 05:54:13.961928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.961956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.962089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.962117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.962277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.962304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.962453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.962481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.962642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.962670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 05:54:13.962785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.962812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.962941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.962967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.963117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.963143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.963319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.963347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.963470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.963497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 05:54:13.963668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.963695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.963867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.963895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.964043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.964070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.964219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.964250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.964404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.964431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 05:54:13.964583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.964610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.964728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.964755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.964909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.964936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.965081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.965109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.965284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.965311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 05:54:13.965432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.965459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.965635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.965662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.965811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.965842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.966022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.966049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.966175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.966202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 05:54:13.966334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.966361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.966540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.966567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.966686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.966715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.966868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.966895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.967050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.967076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 05:54:13.967204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.967231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 05:54:13.967413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 05:54:13.967439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 05:54:13.967558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 05:54:13.967585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 05:54:13.967740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 05:54:13.967766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 05:54:13.967918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 05:54:13.967945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 
00:34:20.597 [2024-07-25 05:54:13.968078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 05:54:13.968104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 05:54:13.968296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 05:54:13.968324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 05:54:13.968455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 05:54:13.968482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 05:54:13.968632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 05:54:13.968675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 05:54:13.968804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 05:54:13.968834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 
00:34:20.597 [2024-07-25 05:54:13.969033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.597 [2024-07-25 05:54:13.969060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.597 qpair failed and we were unable to recover it.
00:34:20.598 [2024-07-25 05:54:13.980449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.598 [2024-07-25 05:54:13.980491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.598 qpair failed and we were unable to recover it.
00:34:20.599 [2024-07-25 05:54:13.986737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.599 [2024-07-25 05:54:13.986783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.599 qpair failed and we were unable to recover it.
00:34:20.600 [2024-07-25 05:54:13.992163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.992193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.992355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.992385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.992578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.992606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.992823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.992876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.993099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.993142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 
00:34:20.600 [2024-07-25 05:54:13.993270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.993297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.993471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.993518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.993783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.993811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.993996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.994024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.994175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.994203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 
00:34:20.600 [2024-07-25 05:54:13.994353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.994385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.994558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.994587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.994729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.994756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.994887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.994916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.995065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.995108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 
00:34:20.600 [2024-07-25 05:54:13.995293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.995321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.995466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.995496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.995688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.995717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.995917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.995946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 05:54:13.996089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 05:54:13.996119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 
00:34:20.601 [2024-07-25 05:54:13.996254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.996282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.996468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.996497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.996720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.996772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.996917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.996945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.997076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.997104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 
00:34:20.601 [2024-07-25 05:54:13.997284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.997313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.997455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.997502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.997646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.997690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.997978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.998031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.998207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.998235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 
00:34:20.601 [2024-07-25 05:54:13.998412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.998456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.998634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.998677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.998829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.998873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.999028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.999055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.999207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.999234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 
00:34:20.601 [2024-07-25 05:54:13.999449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.999494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.999696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.999741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:13.999918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:13.999963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.000117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.000144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.000298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.000326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 
00:34:20.601 [2024-07-25 05:54:14.000505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.000533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.000736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.000785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.000950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.000980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.001150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.001178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.001336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.001365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 
00:34:20.601 [2024-07-25 05:54:14.001512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.001560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.001730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.001774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.001954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.001998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.002121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.002148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.002324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.002370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 
00:34:20.601 [2024-07-25 05:54:14.002509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.002554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.002707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.002734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.002907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.002934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.003109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.003136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.003310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.003338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 
00:34:20.601 [2024-07-25 05:54:14.003496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.003523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.601 [2024-07-25 05:54:14.003670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.601 [2024-07-25 05:54:14.003696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.601 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.003818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.003845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.004037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.004078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.004231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.004288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 
00:34:20.602 [2024-07-25 05:54:14.004450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.004480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.004612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.004641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.004808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.004840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.005053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.005080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.005259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.005287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 
00:34:20.602 [2024-07-25 05:54:14.005407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.005434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.005581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.005610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.005768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.005799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.005955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.006006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.006157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.006201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 
00:34:20.602 [2024-07-25 05:54:14.006405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.006432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.006615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.006645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.006809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.006838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.007055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.007085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 00:34:20.602 [2024-07-25 05:54:14.007263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.007308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 
00:34:20.602 [2024-07-25 05:54:14.007458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.602 [2024-07-25 05:54:14.007485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.602 qpair failed and we were unable to recover it. 
[The three messages above repeat continuously from 05:54:14.007458 through 05:54:14.031714 (log timestamps 00:34:20.602-00:34:20.605), alternating between tqpair=0x5ef600 and tqpair=0x7fdb38000b90. Every attempt targets addr=10.0.0.2, port=4420 and fails identically with errno = 111, and no qpair is recovered.]
00:34:20.605 [2024-07-25 05:54:14.031914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.031958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-07-25 05:54:14.032090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.032117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-07-25 05:54:14.032289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.032320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-07-25 05:54:14.032544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.032577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-07-25 05:54:14.032772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.032802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 
00:34:20.605 [2024-07-25 05:54:14.032934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.032964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-07-25 05:54:14.033138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.033165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-07-25 05:54:14.033342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.033369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-07-25 05:54:14.033564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.033593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-07-25 05:54:14.033785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.033814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 
00:34:20.605 [2024-07-25 05:54:14.034024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.034088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.605 qpair failed and we were unable to recover it. 00:34:20.605 [2024-07-25 05:54:14.034269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.605 [2024-07-25 05:54:14.034312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.034482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.034509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.034701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.034731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.034922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.034952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 05:54:14.035107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.035136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.035348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.035376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.035507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.035549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.035742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.035771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.035961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.035991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 05:54:14.036151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.036181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.036348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.036375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.036506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.036533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.036706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.036732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.036893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.036923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 05:54:14.037086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.037115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.037321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.037348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.037508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.037536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.037673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.037704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.037839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.037869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 05:54:14.038063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.038092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.038255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.038283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.038426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.038453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.038605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.038632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.038781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.038824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 05:54:14.038995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.039039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.039235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.039274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.039427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.039453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.039652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.039682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.039973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.040021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 05:54:14.040192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.040221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.040373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.040400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.040572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.040600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.040785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.040814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.040951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.040980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 05:54:14.041143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.041173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.041338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.041365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.041534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.041563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.041709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.041736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 05:54:14.041882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 05:54:14.041929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 05:54:14.042098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.042128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.042271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.042299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.042456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.042483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.042657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.042687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.042860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.042890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 05:54:14.043079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.043109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.043312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.043340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.043456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.043483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.043652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.043681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.043814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.043843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 05:54:14.043980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.044010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.044170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.044199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.044382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.044409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.044559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.044586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.044760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.044792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 05:54:14.044958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.044988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.045180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.045209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.045382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.045415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.045594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.045623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.045845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.045904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 05:54:14.046059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.046101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.046273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.046317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.046468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.046495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.046644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.046671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.046864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.046894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 05:54:14.047032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.047075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.047268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.047310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.047463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.047490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.047683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.047710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.047885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.047914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 05:54:14.048101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.048131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.048316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.048345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.048497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 05:54:14.048525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 05:54:14.048659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.048687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.048838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.048866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 05:54:14.048998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.049025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.049177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.049204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.049364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.049392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.049568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.049611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.049790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.049817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 05:54:14.049993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.050019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.050216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.050255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.050433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.050463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.050640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.050667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.050873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.050925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 05:54:14.051104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.051131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.051255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.051281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.051472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.051501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.051669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.051698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.051847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.051874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 05:54:14.052066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.052096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.052258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.052288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.052460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.052487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.052662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.052689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.052821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.052848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 05:54:14.053033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.053060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.053228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.053272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.053434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.053465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.053643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.053670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.053877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.053906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 05:54:14.054095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.054121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.054275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.054302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.054470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.054500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.054665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.054695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.054890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.054917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 05:54:14.055093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.055138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.055341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.055369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.055522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.055549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.055687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.055720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.055906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.055936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 05:54:14.056103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 05:54:14.056130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 05:54:14.056268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.056296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.056503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.056533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.056683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.056711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.056864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.056910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 05:54:14.057070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.057100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.057266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.057294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.057453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.057482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.057646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.057676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.057848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.057875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 05:54:14.058020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.058047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.058173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.058200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.058331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.058358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.058509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.058535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.058671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.058701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 05:54:14.058870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.058902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.059057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.059084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.059260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.059288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.059472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.059499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.059668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.059698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 05:54:14.059858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.059888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.060066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.060095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.060312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.060339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.060465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.060492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.060618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.060644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 05:54:14.060821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.060848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.061023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.061052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.061221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.061254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.061435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.061462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.061611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.061642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 05:54:14.061813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.061840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.062009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.062040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.062205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.062234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.062414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.062442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.062597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.062623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 05:54:14.062755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.062781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.062925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.062952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.063156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.063186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.063381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.063411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 05:54:14.063585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 05:54:14.063612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 
00:34:20.610 [2024-07-25 05:54:14.063733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.063759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.063879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.063906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.064029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.064056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.064170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.064197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.064403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.064433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 
00:34:20.610 [2024-07-25 05:54:14.064629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.064656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.064811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.064838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.065000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.065029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.065196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.065223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.065356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.065383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 
00:34:20.610 [2024-07-25 05:54:14.065532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.065559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.065718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.065745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.065896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.065941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.066115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.066145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 05:54:14.066319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 05:54:14.066346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 05:54:14.079514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.079543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.079745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.079775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.079967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.079997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.080138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.080166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.080345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.080396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 05:54:14.080607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.080635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.080817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.080845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.081057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.081085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.081247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.081276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.081435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.081463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 05:54:14.081613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.081643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.081803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.081833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.082002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.082029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.082180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.082208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.082402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.082429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 05:54:14.082602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.082629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.082851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.082916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.083085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.083114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.083320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.083348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.083501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.083545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 05:54:14.083704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.083733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.083910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.083937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.084111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.084138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.084289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.084320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.084476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.084503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 05:54:14.084675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.084719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.084863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.084893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.085090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.085116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 05:54:14.085251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 05:54:14.085281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 05:54:14.085453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 05:54:14.085480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 
00:34:20.614 [2024-07-25 05:54:14.093404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 05:54:14.093444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 05:54:14.093618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 05:54:14.093650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 05:54:14.093826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 05:54:14.093854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 05:54:14.094067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 05:54:14.094128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 05:54:14.094321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 05:54:14.094352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 
00:34:20.615 [2024-07-25 05:54:14.107167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 05:54:14.107196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 05:54:14.107381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 05:54:14.107408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 05:54:14.107574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 05:54:14.107605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 05:54:14.107795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.107824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.108026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.108053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 05:54:14.108219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.108256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.108408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.108435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.108619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.108646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.108827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.108894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.109091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.109120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 05:54:14.109267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.109294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.109452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.109478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.109629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.109657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.109834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.109860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.110060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.110088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 05:54:14.110226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.110264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.110413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.110441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.110610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.110639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.110771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.110800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.111000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.111027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 05:54:14.111176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.111221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.111387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.111413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.111560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.111586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.111753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.111782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.111922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.111950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 05:54:14.112121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.112147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.112297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.112323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.112506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.112550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.112720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.112746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.112872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.112898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 05:54:14.113068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.113098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.113267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.113296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.113462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.113491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.113685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.113714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 05:54:14.113854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.113881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 05:54:14.114003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 05:54:14.114028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.114155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.114181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.114305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.114332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.114499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.114528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.114767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.114796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-07-25 05:54:14.114993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.115019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.115170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.115213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.115459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.115489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.115693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.115719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.115850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.115879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-07-25 05:54:14.116073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.116102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.116270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.116307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.116438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.116480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.116674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.116703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.116881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.116907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-07-25 05:54:14.117062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.117105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.117276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.117307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.117463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.117490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.117642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.117669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.117802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.117828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-07-25 05:54:14.117973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.117999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.118116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.118160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.118361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.118388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.118542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.118569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.118767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.118796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-07-25 05:54:14.118987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.119016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.119204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.119233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.119432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.119458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.119600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.119629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.119824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.119850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-07-25 05:54:14.119993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.120022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.120184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.120214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.120387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.120417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.120570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.120596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.120719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.120746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-07-25 05:54:14.120900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.120926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.121064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.121093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.121255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.121285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 05:54:14.121492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 05:54:14.121519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.618 [2024-07-25 05:54:14.121710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.618 [2024-07-25 05:54:14.121739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.618 qpair failed and we were unable to recover it. 
00:34:20.618 [2024-07-25 05:54:14.121904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.618 [2024-07-25 05:54:14.121933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.618 qpair failed and we were unable to recover it. 00:34:20.618 [2024-07-25 05:54:14.122069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.618 [2024-07-25 05:54:14.122095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.618 qpair failed and we were unable to recover it. 00:34:20.618 [2024-07-25 05:54:14.122222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.618 [2024-07-25 05:54:14.122256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.618 qpair failed and we were unable to recover it. 00:34:20.618 [2024-07-25 05:54:14.122434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.618 [2024-07-25 05:54:14.122463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.618 qpair failed and we were unable to recover it. 00:34:20.618 [2024-07-25 05:54:14.122621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.618 [2024-07-25 05:54:14.122647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.618 qpair failed and we were unable to recover it. 
[ ~100 further identical retries elided: the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats with timestamps running 05:54:14.122822 through 05:54:14.144230 ]
00:34:20.621 [2024-07-25 05:54:14.144432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.144458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.144650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.144678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.144840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.144868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.145035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.145062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.145230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.145268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 
00:34:20.621 [2024-07-25 05:54:14.145398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.145426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.145599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.145625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.145789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.145818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.145955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.145983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.146146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.146173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 
00:34:20.621 [2024-07-25 05:54:14.146370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.146400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.146548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.146577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.146726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.146753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.146900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.146931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.147052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.147079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 
00:34:20.621 [2024-07-25 05:54:14.147225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.147259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.147459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.147500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.147640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.147669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.147871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.147898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.148091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.148119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 
00:34:20.621 [2024-07-25 05:54:14.148298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.148325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.148476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.148503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.148647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.148676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.148857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.148887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.149086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.149112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 
00:34:20.621 [2024-07-25 05:54:14.149289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.149319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.149490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.149519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.149671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.149698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.149817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.149846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.621 [2024-07-25 05:54:14.150043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.150072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 
00:34:20.621 [2024-07-25 05:54:14.150255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.621 [2024-07-25 05:54:14.150294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.621 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.150466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.150494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.150689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.150717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.150896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.150924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.151103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.151132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 
00:34:20.622 [2024-07-25 05:54:14.151274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.151309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.151490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.151517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.151640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.151666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.151821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.151848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.152034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.152061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 
00:34:20.622 [2024-07-25 05:54:14.152177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.152220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.152406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.152435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.152613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.152639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.152778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.152807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.152983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.153010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 
00:34:20.622 [2024-07-25 05:54:14.153139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.153166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.153342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.153371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.153529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.153559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.153754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.153780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.153975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.154005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 
00:34:20.622 [2024-07-25 05:54:14.154180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.154209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.154401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.154428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.154572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.154599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.154801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.154830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.154970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.155002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 
00:34:20.622 [2024-07-25 05:54:14.155144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.155187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.155396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.155423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.155549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.155576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.155771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.155800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.155945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.155974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 
00:34:20.622 [2024-07-25 05:54:14.156138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.156168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.156367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.156393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.156555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.156584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.156733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.156760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 00:34:20.622 [2024-07-25 05:54:14.156919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.622 [2024-07-25 05:54:14.156947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.622 qpair failed and we were unable to recover it. 
00:34:20.623 [2024-07-25 05:54:14.157097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-07-25 05:54:14.157123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-07-25 05:54:14.157275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-07-25 05:54:14.157303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-07-25 05:54:14.157436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-07-25 05:54:14.157462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-07-25 05:54:14.157679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-07-25 05:54:14.157705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-07-25 05:54:14.157854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-07-25 05:54:14.157881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 
00:34:20.623 [2024-07-25 05:54:14.158064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-07-25 05:54:14.158092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-07-25 05:54:14.158289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-07-25 05:54:14.158319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-07-25 05:54:14.158512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-07-25 05:54:14.158538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-07-25 05:54:14.158716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-07-25 05:54:14.158744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 00:34:20.623 [2024-07-25 05:54:14.158911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.623 [2024-07-25 05:54:14.158940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.623 qpair failed and we were unable to recover it. 
00:34:20.623 [2024-07-25 05:54:14.159132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.623 [2024-07-25 05:54:14.159160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.623 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed" sequence repeated ~115 times for tqpair=0x5ef600, addr=10.0.0.2, port=4420, timestamps 05:54:14.159 through 05:54:14.181 ...]
00:34:20.626 [2024-07-25 05:54:14.181564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.626 [2024-07-25 05:54:14.181590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.626 qpair failed and we were unable to recover it.
00:34:20.626 [2024-07-25 05:54:14.181755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.181788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-07-25 05:54:14.181992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.182021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-07-25 05:54:14.182165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.182191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-07-25 05:54:14.182342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.182370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-07-25 05:54:14.182516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.182559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 
00:34:20.626 [2024-07-25 05:54:14.182742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.182768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-07-25 05:54:14.182914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.182940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-07-25 05:54:14.183060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.183086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-07-25 05:54:14.183258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.183294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-07-25 05:54:14.183443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.183469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 
00:34:20.626 [2024-07-25 05:54:14.183590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.183617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-07-25 05:54:14.183740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.183766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.626 [2024-07-25 05:54:14.183959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.626 [2024-07-25 05:54:14.183987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.626 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.184144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.184173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.184354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.184381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 
00:34:20.627 [2024-07-25 05:54:14.184536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.184563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.184761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.184791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.184986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.185012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.185144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.185172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.185336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.185363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 
00:34:20.627 [2024-07-25 05:54:14.185517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.185543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.185730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.185759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.185920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.185948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.186118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.186145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.186342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.186372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 
00:34:20.627 [2024-07-25 05:54:14.186529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.186558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.186736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.186762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.186941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.186967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.187150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.187179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.187329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.187356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 
00:34:20.627 [2024-07-25 05:54:14.187584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.187638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.187832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.187860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.188045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.188072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.188212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.188280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.188463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.188501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 
00:34:20.627 [2024-07-25 05:54:14.188683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.188710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.188999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.189062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.189232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.189269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.189466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.189504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.189772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.189825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 
00:34:20.627 [2024-07-25 05:54:14.189969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.189998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.190175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.190201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.190347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.190375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.190511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.190556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.190726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.190754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 
00:34:20.627 [2024-07-25 05:54:14.190887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.190913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.191068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.191095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.191252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.191279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.191431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.191457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.627 [2024-07-25 05:54:14.191658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.191686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 
00:34:20.627 [2024-07-25 05:54:14.191838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.627 [2024-07-25 05:54:14.191864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.627 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.192040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.192065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.192213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.192252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.192418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.192444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.192563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.192603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 
00:34:20.628 [2024-07-25 05:54:14.192801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.192830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.193000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.193028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.193204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.193257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.193429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.193455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.193618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.193644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 
00:34:20.628 [2024-07-25 05:54:14.193823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.193849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.193971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.193998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.194143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.194170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.194322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.194350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.194498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.194540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 
00:34:20.628 [2024-07-25 05:54:14.194689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.194715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.194840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.194866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.195047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.195073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.195218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.195257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.195405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.195432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 
00:34:20.628 [2024-07-25 05:54:14.195611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.195653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.195829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.195855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.196031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.196057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.196258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.196303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.196470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.196496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 
00:34:20.628 [2024-07-25 05:54:14.196645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.196689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.196880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.196908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.197081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.197107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.197268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.197297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 00:34:20.628 [2024-07-25 05:54:14.197489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.628 [2024-07-25 05:54:14.197515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.628 qpair failed and we were unable to recover it. 
00:34:20.628-00:34:20.631 [the same posix.c:1023:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeated 110 more times, timestamps 2024-07-25 05:54:14.197732 through 05:54:14.219003]
00:34:20.631 [2024-07-25 05:54:14.219210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.631 [2024-07-25 05:54:14.219237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.631 qpair failed and we were unable to recover it. 00:34:20.631 [2024-07-25 05:54:14.219423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.631 [2024-07-25 05:54:14.219452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.631 qpair failed and we were unable to recover it. 00:34:20.631 [2024-07-25 05:54:14.219619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.631 [2024-07-25 05:54:14.219649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.631 qpair failed and we were unable to recover it. 00:34:20.631 [2024-07-25 05:54:14.219842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.631 [2024-07-25 05:54:14.219869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.631 qpair failed and we were unable to recover it. 00:34:20.631 [2024-07-25 05:54:14.220010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.631 [2024-07-25 05:54:14.220039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.631 qpair failed and we were unable to recover it. 
00:34:20.631 [2024-07-25 05:54:14.220238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.631 [2024-07-25 05:54:14.220273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.631 qpair failed and we were unable to recover it. 00:34:20.631 [2024-07-25 05:54:14.220425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.631 [2024-07-25 05:54:14.220451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.631 qpair failed and we were unable to recover it. 00:34:20.631 [2024-07-25 05:54:14.220624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.631 [2024-07-25 05:54:14.220653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.631 qpair failed and we were unable to recover it. 00:34:20.631 [2024-07-25 05:54:14.220792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.631 [2024-07-25 05:54:14.220822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.631 qpair failed and we were unable to recover it. 00:34:20.631 [2024-07-25 05:54:14.220965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.631 [2024-07-25 05:54:14.220991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.631 qpair failed and we were unable to recover it. 
00:34:20.631 [2024-07-25 05:54:14.221138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.221182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.221349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.221376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.221531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.221558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.221734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.221777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.221908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.221941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 
00:34:20.632 [2024-07-25 05:54:14.222129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.222156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.222363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.222393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.222589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.222617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.222784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.222810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.222989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.223019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 
00:34:20.632 [2024-07-25 05:54:14.223184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.223214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.223374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.223401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.223546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.223573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.223744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.223774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.223940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.223967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 
00:34:20.632 [2024-07-25 05:54:14.224120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.224149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.224324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.224350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.224474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.224512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.224711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.224740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.224885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.224915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 
00:34:20.632 [2024-07-25 05:54:14.225058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.225084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.225235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.225288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.225458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.225487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.225682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.225708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.225906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.225935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 
00:34:20.632 [2024-07-25 05:54:14.226085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.226113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.226287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.226314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.226486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.226519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.226714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.226742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.226921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.226948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 
00:34:20.632 [2024-07-25 05:54:14.227144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.227174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.227346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.227380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.227581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.227615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.227891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.227941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.228109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.228137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 
00:34:20.632 [2024-07-25 05:54:14.228306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.228332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.228487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.228541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.228744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.228773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.228915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.228941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.229073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.229099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 
00:34:20.632 [2024-07-25 05:54:14.229274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.632 [2024-07-25 05:54:14.229319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.632 qpair failed and we were unable to recover it. 00:34:20.632 [2024-07-25 05:54:14.229468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.229495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.229690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.229719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.229892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.229922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.230098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.230124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 
00:34:20.633 [2024-07-25 05:54:14.230280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.230332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.230534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.230561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.230712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.230739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.230912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.230941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.231143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.231169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 
00:34:20.633 [2024-07-25 05:54:14.231314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.231341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.231460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.231485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.231676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.231706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.231849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.231875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.232050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.232077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 
00:34:20.633 [2024-07-25 05:54:14.232228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.232264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.232448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.232474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.232656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.232685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.232870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.232896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.233028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.233054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 
00:34:20.633 [2024-07-25 05:54:14.233219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.233252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.233431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.233461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.233635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.233661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.233831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.233860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 00:34:20.633 [2024-07-25 05:54:14.234031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.234060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it. 
00:34:20.633 [2024-07-25 05:54:14.234256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.633 [2024-07-25 05:54:14.234295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.633 qpair failed and we were unable to recover it.
[... the identical three-message sequence — connect() failed, errno = 111; sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats approximately 114 more times, with per-attempt timestamps running from 05:54:14.234458 through 05:54:14.256853 ...]
00:34:20.636 [2024-07-25 05:54:14.257007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.636 [2024-07-25 05:54:14.257051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.636 qpair failed and we were unable to recover it. 00:34:20.636 [2024-07-25 05:54:14.257264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.636 [2024-07-25 05:54:14.257312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.636 qpair failed and we were unable to recover it. 00:34:20.636 [2024-07-25 05:54:14.257459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.636 [2024-07-25 05:54:14.257485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.636 qpair failed and we were unable to recover it. 00:34:20.636 [2024-07-25 05:54:14.257707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.636 [2024-07-25 05:54:14.257770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.636 qpair failed and we were unable to recover it. 00:34:20.636 [2024-07-25 05:54:14.257971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.636 [2024-07-25 05:54:14.258000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.636 qpair failed and we were unable to recover it. 
00:34:20.636 [2024-07-25 05:54:14.258173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.636 [2024-07-25 05:54:14.258198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.636 qpair failed and we were unable to recover it. 00:34:20.636 [2024-07-25 05:54:14.258374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.636 [2024-07-25 05:54:14.258400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.636 qpair failed and we were unable to recover it. 00:34:20.636 [2024-07-25 05:54:14.258581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.258611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.258763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.258789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.258971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.259000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 
00:34:20.637 [2024-07-25 05:54:14.259164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.259193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.259350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.259377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.259500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.259536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.259686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.259711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.259841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.259867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 
00:34:20.637 [2024-07-25 05:54:14.260072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.260101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.260256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.260309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.260455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.260481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.260662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.260691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.260881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.260910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 
00:34:20.637 [2024-07-25 05:54:14.261060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.261086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.261228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.261281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.261457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.261486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.261695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.261735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.261890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.261918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 
00:34:20.637 [2024-07-25 05:54:14.262091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.262135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.262303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.262331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.262464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.262490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.262614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.262645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.262797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.262823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 
00:34:20.637 [2024-07-25 05:54:14.262984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.263010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.263166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.263192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.263342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.263369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.263524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.263566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.263709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.263740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 
00:34:20.637 [2024-07-25 05:54:14.263919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.263946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.264122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.264148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.264304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.264334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.264528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.264554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.264753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.264782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 
00:34:20.637 [2024-07-25 05:54:14.264950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.264981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.637 [2024-07-25 05:54:14.265181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.637 [2024-07-25 05:54:14.265219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.637 qpair failed and we were unable to recover it. 00:34:20.917 [2024-07-25 05:54:14.265460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.917 [2024-07-25 05:54:14.265517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.917 qpair failed and we were unable to recover it. 00:34:20.917 [2024-07-25 05:54:14.265746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.265809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.266091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.266145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 
00:34:20.918 [2024-07-25 05:54:14.266414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.266453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.266646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.266677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.266854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.266881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.267008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.267035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.267166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.267192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 
00:34:20.918 [2024-07-25 05:54:14.267319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.267346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.267492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.267533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.267697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.267726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.267866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.267892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.268091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.268120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 
00:34:20.918 [2024-07-25 05:54:14.268300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.268332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.268484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.268519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.268648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.268674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.268827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.268853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.269015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.269042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 
00:34:20.918 [2024-07-25 05:54:14.269190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.269233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.269412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.269438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.269597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.269624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.269854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.269905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.270095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.270124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 
00:34:20.918 [2024-07-25 05:54:14.270329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.270356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.270513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.270557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.270727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.270755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.270923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.270950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.271112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.271139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 
00:34:20.918 [2024-07-25 05:54:14.271294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.271321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.271465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.271491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.271640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.271684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.271851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.271882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 00:34:20.918 [2024-07-25 05:54:14.272056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.918 [2024-07-25 05:54:14.272084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.918 qpair failed and we were unable to recover it. 
00:34:20.918 [2024-07-25 05:54:14.272202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.918 [2024-07-25 05:54:14.272228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.918 qpair failed and we were unable to recover it.
00:34:20.918 [2024-07-25 05:54:14.272420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.918 [2024-07-25 05:54:14.272447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.918 qpair failed and we were unable to recover it.
00:34:20.918 [2024-07-25 05:54:14.272615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.918 [2024-07-25 05:54:14.272642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.918 qpair failed and we were unable to recover it.
00:34:20.918 [2024-07-25 05:54:14.272840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.918 [2024-07-25 05:54:14.272869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.273035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.273065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.273233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.273267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.273430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.273457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.273669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.273699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.273871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.273900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.274022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.274067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.274251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.274279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.274432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.274458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.274582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.274609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.274769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.274796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.274975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.275001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.275149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.275175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.275320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.275347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.275548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.275577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.275745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.275774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.275937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.275965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.276156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.276189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.276368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.276395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.276597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.276626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.276949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.277004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.277195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.277224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.277389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.277415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.277569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.277596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.277774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.277800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.277917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.277943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.278093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.278123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.278311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.278338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.278514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.278558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.278700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.278726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.278926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.278954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.279127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.279156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.279350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.279377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.279497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.279524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.279752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.279778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.279942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.279968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.280163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.280192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.919 qpair failed and we were unable to recover it.
00:34:20.919 [2024-07-25 05:54:14.280378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.919 [2024-07-25 05:54:14.280405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.280581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.280610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.280804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.280833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.280996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.281025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.281187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.281214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.281397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.281423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.281544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.281571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.281788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.281863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.282130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.282181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.282359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.282387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.282574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.282618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.282827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.282871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.283044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.283088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.283226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.283261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.283450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.283478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.283700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.283744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.283898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.283924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.284079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.284107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.284259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.284309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.284483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.284535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.284738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.284785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.284967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.285012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.285166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.285193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.285363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.285407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.285556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.285586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.285777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.285821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.285945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.285974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.286155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.286181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.286307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.286335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.286535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.286579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.286765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.286810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.287022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.287075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.287199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.287225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.287405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.287452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.287635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.287680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.920 [2024-07-25 05:54:14.287857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.920 [2024-07-25 05:54:14.287901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.920 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.288054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.288080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.288230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.288264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.288450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.288496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.288668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.288713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.288902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.288946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.289100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.289127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.289302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.289352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.289561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.289603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.289770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.289813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.289951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.289997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.290157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.290183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.290386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.290430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.290615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.290647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.290787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.290817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.290985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.291015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.291174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.291203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.291394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.291422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.291592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.291621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.291780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.291810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.291959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.292001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.292183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.292209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.292368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.292395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.292541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.292570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.292857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.292911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.293078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.293107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.293257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.293288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.293425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.293451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.293608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.293636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.293791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.293834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.294025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.294054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.294211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.294248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.294427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.294453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.294612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.294641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.294816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.294858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.921 qpair failed and we were unable to recover it.
00:34:20.921 [2024-07-25 05:54:14.295113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.921 [2024-07-25 05:54:14.295165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.922 qpair failed and we were unable to recover it.
00:34:20.922 [2024-07-25 05:54:14.295321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.922 [2024-07-25 05:54:14.295348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.922 qpair failed and we were unable to recover it.
00:34:20.922 [2024-07-25 05:54:14.295525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.922 [2024-07-25 05:54:14.295551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.922 qpair failed and we were unable to recover it.
00:34:20.922 [2024-07-25 05:54:14.295715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.922 [2024-07-25 05:54:14.295776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.922 qpair failed and we were unable to recover it.
00:34:20.922 [2024-07-25 05:54:14.296089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.922 [2024-07-25 05:54:14.296153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.922 qpair failed and we were unable to recover it.
00:34:20.922 [2024-07-25 05:54:14.296313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.296340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.296520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.296564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.296831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.296882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.297083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.297112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.297281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.297324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 
00:34:20.922 [2024-07-25 05:54:14.297456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.297484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.297694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.297723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.298022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.298079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.298253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.298291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.298420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.298447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 
00:34:20.922 [2024-07-25 05:54:14.298659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.298688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.298952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.299003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.299169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.299198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.299384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.299411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.299566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.299592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 
00:34:20.922 [2024-07-25 05:54:14.299739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.299765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.299944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.299972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.300125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.300155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.300312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.300339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.300503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.300529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 
00:34:20.922 [2024-07-25 05:54:14.300659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.300685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.300805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.300831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.301007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.301033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.301199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.301240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.922 qpair failed and we were unable to recover it. 00:34:20.922 [2024-07-25 05:54:14.301409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.922 [2024-07-25 05:54:14.301436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 
00:34:20.923 [2024-07-25 05:54:14.301610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.301639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.301797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.301826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.302022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.302051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.302211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.302240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.302438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.302464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 
00:34:20.923 [2024-07-25 05:54:14.302676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.302704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.302838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.302866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.303033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.303059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.303236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.303298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.303475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.303513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 
00:34:20.923 [2024-07-25 05:54:14.303725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.303760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.303930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.303960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.304159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.304189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.304367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.304393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.304544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.304571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 
00:34:20.923 [2024-07-25 05:54:14.304717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.304747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.304928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.304955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.305130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.305159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.305318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.305345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.305492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.305524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 
00:34:20.923 [2024-07-25 05:54:14.305650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.305676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.305862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.305905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.306069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.306098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.306271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.306315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.306438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.306465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 
00:34:20.923 [2024-07-25 05:54:14.306613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.306640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.306820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.306864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.307064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.307090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.307260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.307307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.307489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.307533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 
00:34:20.923 [2024-07-25 05:54:14.307696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.307724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.307874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.307900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.308044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.308070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.308299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.308326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 00:34:20.923 [2024-07-25 05:54:14.308453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.923 [2024-07-25 05:54:14.308480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.923 qpair failed and we were unable to recover it. 
00:34:20.923 [2024-07-25 05:54:14.308633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.308675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.308884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.308910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.309100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.309129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.309260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.309305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.309427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.309455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 
00:34:20.924 [2024-07-25 05:54:14.309635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.309662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.309827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.309855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.310030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.310063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.310238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.310273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.310452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.310478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 
00:34:20.924 [2024-07-25 05:54:14.310651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.310681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.310876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.310902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.311069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.311098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.311268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.311313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 00:34:20.924 [2024-07-25 05:54:14.311461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.924 [2024-07-25 05:54:14.311487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.924 qpair failed and we were unable to recover it. 
00:34:20.924 [2024-07-25 05:54:14.311681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.924 [2024-07-25 05:54:14.311710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.924 qpair failed and we were unable to recover it.
[... identical connect() failure (errno = 111) repeated 73 more times for tqpair=0x5ef600, 05:54:14.311877 through 05:54:14.326538 ...]
00:34:20.926 [2024-07-25 05:54:14.326735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.926 [2024-07-25 05:54:14.326775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.926 qpair failed and we were unable to recover it.
[... identical connect() failure (errno = 111) repeated 40 more times for tqpair=0x7fdb38000b90, 05:54:14.326954 through 05:54:14.335157 ...]
00:34:20.927 [2024-07-25 05:54:14.335337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.335381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 00:34:20.927 [2024-07-25 05:54:14.335509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.335537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 00:34:20.927 [2024-07-25 05:54:14.335737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.335791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 00:34:20.927 [2024-07-25 05:54:14.335983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.336022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 00:34:20.927 [2024-07-25 05:54:14.336202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.336229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 
00:34:20.927 [2024-07-25 05:54:14.336406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.336450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 00:34:20.927 [2024-07-25 05:54:14.336627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.336671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 00:34:20.927 [2024-07-25 05:54:14.336828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.336856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 00:34:20.927 [2024-07-25 05:54:14.337033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.337059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 00:34:20.927 [2024-07-25 05:54:14.337238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.337283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 
00:34:20.927 [2024-07-25 05:54:14.337459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.927 [2024-07-25 05:54:14.337506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.927 qpair failed and we were unable to recover it. 00:34:20.927 [2024-07-25 05:54:14.337678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.337722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.337872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.337920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.338034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.338060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.338238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.338271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 
00:34:20.928 [2024-07-25 05:54:14.338488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.338535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.338759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.338804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.338977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.339021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.339198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.339225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.339399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.339425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 
00:34:20.928 [2024-07-25 05:54:14.339602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.339646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.339822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.339864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.340040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.340084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.340263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.340302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.340452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.340507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 
00:34:20.928 [2024-07-25 05:54:14.340692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.340738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.340895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.340940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.341088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.341115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.341273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.341299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.341447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.341504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 
00:34:20.928 [2024-07-25 05:54:14.341655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.341700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.341911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.341941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.342114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.342141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.342306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.342351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.342557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.342601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 
00:34:20.928 [2024-07-25 05:54:14.342747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.342791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.342944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.342971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.343125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.343151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.343319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.343363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.343543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.343591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 
00:34:20.928 [2024-07-25 05:54:14.343795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.343839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.343994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.344021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.344172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.344198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.344398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.344427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.344629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.344673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 
00:34:20.928 [2024-07-25 05:54:14.344850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.344896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.345046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.928 [2024-07-25 05:54:14.345073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.928 qpair failed and we were unable to recover it. 00:34:20.928 [2024-07-25 05:54:14.345200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.345228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.345429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.345457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.345636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.345679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 
00:34:20.929 [2024-07-25 05:54:14.345852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.345896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.346050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.346077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.346267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.346304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.346473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.346525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.346671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.346700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 
00:34:20.929 [2024-07-25 05:54:14.346919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.346963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.347122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.347148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.347315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.347360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.347568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.347610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.347899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.347942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 
00:34:20.929 [2024-07-25 05:54:14.348065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.348102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.348313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.348358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.348557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.348601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.348914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.348973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.349127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.349154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 
00:34:20.929 [2024-07-25 05:54:14.349351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.349395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.349561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.349606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.349775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.349819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.349997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.350041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.350196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.350223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 
00:34:20.929 [2024-07-25 05:54:14.350422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.350468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.350675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.350720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.350901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.350944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.351126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.351152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 00:34:20.929 [2024-07-25 05:54:14.351324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.929 [2024-07-25 05:54:14.351369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:20.929 qpair failed and we were unable to recover it. 
00:34:20.929 [2024-07-25 05:54:14.351577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.929 [2024-07-25 05:54:14.351620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:20.929 qpair failed and we were unable to recover it.
[... the preceding three lines repeated 3 more times for tqpair=0x7fdb38000b90, timestamps 05:54:14.351794 through 05:54:14.352177 ...]
00:34:20.929 [2024-07-25 05:54:14.352349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.929 [2024-07-25 05:54:14.352394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:20.929 qpair failed and we were unable to recover it.
[... the preceding three lines repeated ~110 more times for tqpair=0x5ef600, timestamps 05:54:14.352589 through 05:54:14.374598, log clock advancing 00:34:20.929 to 00:34:20.933 ...]
00:34:20.933 [2024-07-25 05:54:14.374747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.374774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.374968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.374997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.375136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.375166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.375363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.375390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.375565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.375605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 
00:34:20.933 [2024-07-25 05:54:14.375752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.375782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.375953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.375980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.376121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.376149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.376306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.376349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.376553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.376579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 
00:34:20.933 [2024-07-25 05:54:14.376748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.376778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.376945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.376974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.377147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.377174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.377376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.377405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.377575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.377602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 
00:34:20.933 [2024-07-25 05:54:14.377776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.377803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.377970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.378001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.378134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.378164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.378338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.378364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.378488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.378514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 
00:34:20.933 [2024-07-25 05:54:14.378670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.378697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.378848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.378874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.379049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.379076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.933 qpair failed and we were unable to recover it. 00:34:20.933 [2024-07-25 05:54:14.379208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.933 [2024-07-25 05:54:14.379237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.379438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.379464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-07-25 05:54:14.379616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.379664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.379825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.379855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.380051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.380077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.380255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.380296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.380462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.380491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-07-25 05:54:14.380664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.380690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.380806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.380832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.380982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.381009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.381168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.381194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.381335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.381361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-07-25 05:54:14.381537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.381566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.381762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.381788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.381943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.381973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.382179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.382208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.382375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.382402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-07-25 05:54:14.382549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.382591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.382837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.382889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.383053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.383080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.383220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.383256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.383382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.383408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-07-25 05:54:14.383540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.383566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.383696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.383723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.383844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.383870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.384025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.384052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.384215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.384251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-07-25 05:54:14.384454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.384480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.384657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.384684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.384804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.384834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.384960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.384986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.385138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.385166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-07-25 05:54:14.385345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.385375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.385540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.385570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.385735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.385762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.385918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.385945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.934 [2024-07-25 05:54:14.386121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.386150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 
00:34:20.934 [2024-07-25 05:54:14.386301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.934 [2024-07-25 05:54:14.386329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.934 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.386531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.386560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.386707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.386736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.386934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.386961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.387157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.387187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 
00:34:20.935 [2024-07-25 05:54:14.387362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.387389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.387513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.387540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.387669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.387713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.387848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.387878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.388022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.388049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 
00:34:20.935 [2024-07-25 05:54:14.388191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.388218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.388388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.388414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.388593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.388620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.388768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.388795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.388963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.388993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 
00:34:20.935 [2024-07-25 05:54:14.389223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.389273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.389447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.389474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.389642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.389672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.389841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.389868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 00:34:20.935 [2024-07-25 05:54:14.390062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.935 [2024-07-25 05:54:14.390091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.935 qpair failed and we were unable to recover it. 
00:34:20.938 [2024-07-25 05:54:14.412014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.938 [2024-07-25 05:54:14.412043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.938 qpair failed and we were unable to recover it. 00:34:20.938 [2024-07-25 05:54:14.412202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.938 [2024-07-25 05:54:14.412231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.938 qpair failed and we were unable to recover it. 00:34:20.938 [2024-07-25 05:54:14.412401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.938 [2024-07-25 05:54:14.412427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.938 qpair failed and we were unable to recover it. 00:34:20.938 [2024-07-25 05:54:14.412621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.938 [2024-07-25 05:54:14.412650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.938 qpair failed and we were unable to recover it. 00:34:20.938 [2024-07-25 05:54:14.412808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.412837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 
00:34:20.939 [2024-07-25 05:54:14.413010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.413037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.413193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.413220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.413390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.413416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.413599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.413626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.413775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.413804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 
00:34:20.939 [2024-07-25 05:54:14.413964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.413993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.414186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.414213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.414397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.414427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.414562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.414591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.414756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.414782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 
00:34:20.939 [2024-07-25 05:54:14.414957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.414986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.415142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.415173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.415379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.415406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.415565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.415609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.415763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.415792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 
00:34:20.939 [2024-07-25 05:54:14.415973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.416000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.416167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.416197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.416383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.416410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.416565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.416592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.416790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.416819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 
00:34:20.939 [2024-07-25 05:54:14.416944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.416973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.417144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.417170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.417369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.417400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.417558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.417588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.417764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.417791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 
00:34:20.939 [2024-07-25 05:54:14.417913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.417940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.418115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.418159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.418325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.418353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.418482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.418509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.418665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.418695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 
00:34:20.939 [2024-07-25 05:54:14.418846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.418874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.419026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.419053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.419248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.419279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.419432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.419460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 00:34:20.939 [2024-07-25 05:54:14.419591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.939 [2024-07-25 05:54:14.419623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.939 qpair failed and we were unable to recover it. 
00:34:20.940 [2024-07-25 05:54:14.419819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.419848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.420016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.420043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.420209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.420238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.420398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.420426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.420582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.420609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 
00:34:20.940 [2024-07-25 05:54:14.420758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.420803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.421000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.421029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.421220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.421277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.421458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.421487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.421651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.421681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 
00:34:20.940 [2024-07-25 05:54:14.421826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.421852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.422062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.422092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.422232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.422271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.422433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.422460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.422618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.422645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 
00:34:20.940 [2024-07-25 05:54:14.422816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.422846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.423053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.423080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.423259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.423290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.423487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.423514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.423690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.423717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 
00:34:20.940 [2024-07-25 05:54:14.423910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.423940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.424104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.424133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.424335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.424363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.424557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.424587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.424758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.424787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 
00:34:20.940 [2024-07-25 05:54:14.424937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.424964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.425084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.425112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.425294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.425322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.425466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.425493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.425694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.425724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 
00:34:20.940 [2024-07-25 05:54:14.425892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.425923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.426073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.426100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.426258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.426286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.426438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.940 [2024-07-25 05:54:14.426482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.940 qpair failed and we were unable to recover it. 00:34:20.940 [2024-07-25 05:54:14.426656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.941 [2024-07-25 05:54:14.426684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.941 qpair failed and we were unable to recover it. 
00:34:20.941 [2024-07-25 05:54:14.426851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.941 [2024-07-25 05:54:14.426881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.941 qpair failed and we were unable to recover it. 00:34:20.941 [2024-07-25 05:54:14.427083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.941 [2024-07-25 05:54:14.427113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.941 qpair failed and we were unable to recover it. 00:34:20.941 [2024-07-25 05:54:14.427319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.941 [2024-07-25 05:54:14.427348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.941 qpair failed and we were unable to recover it. 00:34:20.941 [2024-07-25 05:54:14.427536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.941 [2024-07-25 05:54:14.427565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.941 qpair failed and we were unable to recover it. 00:34:20.941 [2024-07-25 05:54:14.427706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.941 [2024-07-25 05:54:14.427737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.941 qpair failed and we were unable to recover it. 
00:34:20.944 [2024-07-25 05:54:14.449512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.449541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.449739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.449765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.449953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.449983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.450113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.450143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.450320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.450358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 
00:34:20.944 [2024-07-25 05:54:14.450483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.450511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.450653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.450683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.450852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.450879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.451039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.451069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.451231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.451269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 
00:34:20.944 [2024-07-25 05:54:14.451467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.451493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.451630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.451661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.451792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.451822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.451970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.451998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.452189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.452219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 
00:34:20.944 [2024-07-25 05:54:14.452355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.452385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.452553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.452579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.452776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.452806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.944 qpair failed and we were unable to recover it. 00:34:20.944 [2024-07-25 05:54:14.452970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.944 [2024-07-25 05:54:14.452999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.453135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.453161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [2024-07-25 05:54:14.453338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.453382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.453547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.453577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.453772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.453799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.453965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.453995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.454166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.454196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [2024-07-25 05:54:14.454349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.454376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.454553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.454601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.454764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.454793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.454971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.454997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.455172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.455199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [2024-07-25 05:54:14.455367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.455395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.455570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.455597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.455759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.455788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.455913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.455943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.456120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.456147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [2024-07-25 05:54:14.456318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.456348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.456536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.456565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.456706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.456733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.456881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.456925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.457089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.457119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [2024-07-25 05:54:14.457322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.457349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.457491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.457521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.457690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.457719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.457917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.457943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.458106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.458135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [2024-07-25 05:54:14.458270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.458301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.458477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.458504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.458629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.458656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.458772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.458798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.458928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.458955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [2024-07-25 05:54:14.459071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.459114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.459297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.459324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.459472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.459499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.459618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.459649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 00:34:20.945 [2024-07-25 05:54:14.459790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.945 [2024-07-25 05:54:14.459820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.945 qpair failed and we were unable to recover it. 
00:34:20.945 [2024-07-25 05:54:14.459990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.460017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.460137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.460164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.460301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.460331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.460507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.460533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.460687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.460714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 
00:34:20.946 [2024-07-25 05:54:14.460887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.460917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.461088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.461115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.461266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.461294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.461435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.461465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.461629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.461655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 
00:34:20.946 [2024-07-25 05:54:14.461807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.461833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.461979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.462006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.462132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.462159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.462330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.462360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.462529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.462558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 
00:34:20.946 [2024-07-25 05:54:14.462698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.462725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.462881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.462909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.463098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.463127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.463291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.463319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.463492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.463521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 
00:34:20.946 [2024-07-25 05:54:14.463683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.463713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.463850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.463878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.464041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.464071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.464277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.464304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.464452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.464479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 
00:34:20.946 [2024-07-25 05:54:14.464652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.464681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.464922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.464975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.465151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.465178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.465332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.465360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.465539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.465566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 
00:34:20.946 [2024-07-25 05:54:14.465683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.465710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.465832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.465858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.465978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.466005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.466199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.466229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.466411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.466439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 
00:34:20.946 [2024-07-25 05:54:14.466641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.466671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.946 [2024-07-25 05:54:14.466833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.946 [2024-07-25 05:54:14.466859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.946 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.467011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.467038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.467229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.467265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.467441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.467472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 
00:34:20.947 [2024-07-25 05:54:14.467681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.467711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.467880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.467906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.468060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.468086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.468226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.468264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.468434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.468464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 
00:34:20.947 [2024-07-25 05:54:14.468639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.468666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.468845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.468875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.469032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.469061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.469230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.469266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.469417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.469443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 
00:34:20.947 [2024-07-25 05:54:14.469634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.469664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.469814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.469841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.470018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.470061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.470278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.470309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.470487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.470514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 
00:34:20.947 [2024-07-25 05:54:14.470666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.470692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.470833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.470860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.471031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.471058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.471211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.471237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.471407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.471437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 
00:34:20.947 [2024-07-25 05:54:14.471634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.471660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.471778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.471806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.471993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.472020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.472170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.472198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.947 qpair failed and we were unable to recover it. 00:34:20.947 [2024-07-25 05:54:14.472407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.947 [2024-07-25 05:54:14.472438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 
00:34:20.948 [2024-07-25 05:54:14.472629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.472660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.472839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.472869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.472982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.473025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.473166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.473196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.473368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.473396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 
00:34:20.948 [2024-07-25 05:54:14.473570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.473597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.473741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.473782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.473932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.473958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.474093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.474119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.474308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.474336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 
00:34:20.948 [2024-07-25 05:54:14.474489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.474516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.474689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.474719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.474847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.474877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.475072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.475098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.475266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.475296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 
00:34:20.948 [2024-07-25 05:54:14.475458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.475488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.475680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.475707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.475875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.475906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.476069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.476099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.476282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.476309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 
00:34:20.948 [2024-07-25 05:54:14.476433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.476460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.476630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.476659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.476854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.476880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.477027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.477069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.477235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.477273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 
00:34:20.948 [2024-07-25 05:54:14.477437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.477463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.477669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.477698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.477888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.477914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.478059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.478086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.478235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.478271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 
00:34:20.948 [2024-07-25 05:54:14.478465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.478495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.478662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.478688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.478858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.478888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.479055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.479087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.479270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.479298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 
00:34:20.948 [2024-07-25 05:54:14.479437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.479468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.479639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.948 [2024-07-25 05:54:14.479669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.948 qpair failed and we were unable to recover it. 00:34:20.948 [2024-07-25 05:54:14.479837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.479864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.480029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.480058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.480220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.480256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 
00:34:20.949 [2024-07-25 05:54:14.480457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.480484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.480655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.480685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.480849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.480883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.481057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.481084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.481251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.481281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 
00:34:20.949 [2024-07-25 05:54:14.481443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.481472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.481607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.481634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.481781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.481825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.482027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.482083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.482250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.482278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 
00:34:20.949 [2024-07-25 05:54:14.482435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.482465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.482599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.482628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.482818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.482844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.482993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.483021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 00:34:20.949 [2024-07-25 05:54:14.483145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.949 [2024-07-25 05:54:14.483172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.949 qpair failed and we were unable to recover it. 
00:34:20.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1782062 Killed "${NVMF_APP[@]}" "$@"
00:34:20.949 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:20.949 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:20.950 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:20.950 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:20.950 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:20.950 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1782611
00:34:20.950 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:20.950 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1782611
00:34:20.950 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1782611 ']'
00:34:20.950 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:20.950 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:20.951 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:20.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:20.951 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:20.951 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:20.952 [2024-07-25 05:54:14.504181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.952 [2024-07-25 05:54:14.504210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.952 qpair failed and we were unable to recover it. 00:34:20.952 [2024-07-25 05:54:14.504396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.952 [2024-07-25 05:54:14.504423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.952 qpair failed and we were unable to recover it. 00:34:20.952 [2024-07-25 05:54:14.504594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.952 [2024-07-25 05:54:14.504623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.952 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.504790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.504816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.504994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.505023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 
00:34:20.953 [2024-07-25 05:54:14.505179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.505208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.505408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.505434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.505565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.505590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.505718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.505745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.505897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.505924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 
00:34:20.953 [2024-07-25 05:54:14.506094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.506122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.506283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.506314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.506490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.506516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.506708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.506736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.506903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.506932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 
00:34:20.953 [2024-07-25 05:54:14.507116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.507146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.507284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.507327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.507475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.507503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.507718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.507744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.507948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.507977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 
00:34:20.953 [2024-07-25 05:54:14.508172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.508200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.508381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.508407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.508576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.508605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.508769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.508797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.508963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.508993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 
00:34:20.953 [2024-07-25 05:54:14.509116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.509142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.509286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.509313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.509476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.509502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.509645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.509689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.509844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.509873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 
00:34:20.953 [2024-07-25 05:54:14.510064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.510094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.510266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.510310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.510429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.510455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.510610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.510636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.510783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.510808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 
00:34:20.953 [2024-07-25 05:54:14.510956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.510999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.953 qpair failed and we were unable to recover it. 00:34:20.953 [2024-07-25 05:54:14.511189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.953 [2024-07-25 05:54:14.511217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.511397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.511424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.511599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.511629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.511960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.512011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 
00:34:20.954 [2024-07-25 05:54:14.512182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.512211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.512390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.512418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.512569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.512596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.512771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.512800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.512959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.512988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 
00:34:20.954 [2024-07-25 05:54:14.513157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.513184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.513327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.513357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.513491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.513520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.513664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.513690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.513838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.513881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 
00:34:20.954 [2024-07-25 05:54:14.514057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.514086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.514256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.514299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.514456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.514482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.514645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.514674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.514819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.514845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 
00:34:20.954 [2024-07-25 05:54:14.514992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.515036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.515256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.515300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.515432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.515458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.515625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.515654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.515843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.515872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 
00:34:20.954 [2024-07-25 05:54:14.516134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.516162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.516335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.516362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.516514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.516557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.516760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.516786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.516991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.517020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 
00:34:20.954 [2024-07-25 05:54:14.517206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.517238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.517412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.517438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.517567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.517593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.517720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.517746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.517933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.517959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 
00:34:20.954 [2024-07-25 05:54:14.518130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.518158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.518306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.518333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.518490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.518517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.518680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.518708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 00:34:20.954 [2024-07-25 05:54:14.518880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.954 [2024-07-25 05:54:14.518910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.954 qpair failed and we were unable to recover it. 
00:34:20.955 [2024-07-25 05:54:14.519149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.955 [2024-07-25 05:54:14.519178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.955 qpair failed and we were unable to recover it. 
00:34:20.958 [2024-07-25 05:54:14.541834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.541864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.541993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.542022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.542137] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:34:20.958 [2024-07-25 05:54:14.542225] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.958 [2024-07-25 05:54:14.542235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.542287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.542440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.542464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 
00:34:20.958 [2024-07-25 05:54:14.542634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.542660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.542824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.542850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.543044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.543073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.543208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.543239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.543439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.543465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 
00:34:20.958 [2024-07-25 05:54:14.543666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.543695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.543870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.543899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.544087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.544116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.544271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.544315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.544460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.544487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 
00:34:20.958 [2024-07-25 05:54:14.544713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.544739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.544935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.544964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.545140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.545181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.545375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.545402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.545572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.545601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 
00:34:20.958 [2024-07-25 05:54:14.545761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.545790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.545962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.545991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.546128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.546157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.546310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.546337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.958 qpair failed and we were unable to recover it. 00:34:20.958 [2024-07-25 05:54:14.546482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.958 [2024-07-25 05:54:14.546508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 
00:34:20.959 [2024-07-25 05:54:14.546690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.546720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.546888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.546914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.547117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.547146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.547323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.547349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.547513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.547539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 
00:34:20.959 [2024-07-25 05:54:14.547687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.547713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.547883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.547913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.548055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.548084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.548219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.548255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.548454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.548480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 
00:34:20.959 [2024-07-25 05:54:14.548684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.548713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.548858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.548884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.549029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.549071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.549279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.549310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.549462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.549489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 
00:34:20.959 [2024-07-25 05:54:14.549661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.549690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.549869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.549897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.550089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.550118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.550303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.550331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.550473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.550513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 
00:34:20.959 [2024-07-25 05:54:14.550700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.550727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.550860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.550887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.551065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.551092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.551273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.551300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.551501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.551530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 
00:34:20.959 [2024-07-25 05:54:14.551788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.551841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.551983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.552009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.552166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.552197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.552362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.552389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.552543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.552571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 
00:34:20.959 [2024-07-25 05:54:14.552719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.552749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.552917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.552945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.553110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.553140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.553342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.553369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.553517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.553543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 
00:34:20.959 [2024-07-25 05:54:14.553699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.553725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.553898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.553927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.554122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.554151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.554332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.554359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.554519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.554560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 
00:34:20.959 [2024-07-25 05:54:14.554733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.554764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.959 qpair failed and we were unable to recover it. 00:34:20.959 [2024-07-25 05:54:14.554932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.959 [2024-07-25 05:54:14.554959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.960 qpair failed and we were unable to recover it. 00:34:20.960 [2024-07-25 05:54:14.555125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.960 [2024-07-25 05:54:14.555154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.960 qpair failed and we were unable to recover it. 00:34:20.960 [2024-07-25 05:54:14.555328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.960 [2024-07-25 05:54:14.555354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.960 qpair failed and we were unable to recover it. 00:34:20.960 [2024-07-25 05:54:14.555501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.960 [2024-07-25 05:54:14.555527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.960 qpair failed and we were unable to recover it. 
00:34:20.960 [2024-07-25 05:54:14.555690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.960 [2024-07-25 05:54:14.555718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.960 qpair failed and we were unable to recover it. 00:34:20.960 [2024-07-25 05:54:14.555888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.960 [2024-07-25 05:54:14.555917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.960 qpair failed and we were unable to recover it. 00:34:20.960 [2024-07-25 05:54:14.556106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.960 [2024-07-25 05:54:14.556136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.960 qpair failed and we were unable to recover it. 00:34:20.960 [2024-07-25 05:54:14.556337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.960 [2024-07-25 05:54:14.556365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.960 qpair failed and we were unable to recover it. 00:34:20.960 [2024-07-25 05:54:14.556489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.960 [2024-07-25 05:54:14.556515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.960 qpair failed and we were unable to recover it. 
00:34:20.960 [2024-07-25 05:54:14.556670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.960 [2024-07-25 05:54:14.556697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.960 qpair failed and we were unable to recover it.
[... identical connect() (errno = 111) / qpair-failure pairs for tqpair=0x7fdb40000b90 (10.0.0.2:4420) repeated through 05:54:14.576762, elided ...]
00:34:20.962 [2024-07-25 05:54:14.576955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.962 [2024-07-25 05:54:14.576984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.962 qpair failed and we were unable to recover it.
00:34:20.962 EAL: No free 2048 kB hugepages reported on node 1
[... further identical connect() (errno = 111) / qpair-failure pairs for tqpair=0x7fdb40000b90 (10.0.0.2:4420) elided ...]
00:34:20.962 [2024-07-25 05:54:14.579981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.962 [2024-07-25 05:54:14.580025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-25 05:54:14.580175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.962 [2024-07-25 05:54:14.580200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-25 05:54:14.580342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.962 [2024-07-25 05:54:14.580369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-25 05:54:14.580513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.962 [2024-07-25 05:54:14.580540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.962 qpair failed and we were unable to recover it. 00:34:20.962 [2024-07-25 05:54:14.580691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.962 [2024-07-25 05:54:14.580719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.962 qpair failed and we were unable to recover it. 
00:34:20.962 [2024-07-25 05:54:14.580870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.962 [2024-07-25 05:54:14.580896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.581065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.581091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.581265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.581291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.581416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.581446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.581572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.581598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-25 05:54:14.581713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.581739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.581870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.581898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.582076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.582102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.582274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.582300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.582444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.582470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-25 05:54:14.582650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.582675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.582830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.582856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.583004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.583030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.583157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.583183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.583370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.583396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-25 05:54:14.583540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.583567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.583708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.583734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.583864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.583890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.584011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.584036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.584194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.584219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-25 05:54:14.584350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.584376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.584559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.584585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.584734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.584760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.584882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.584908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.585033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.585059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-25 05:54:14.585211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.585254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.585410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.585437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.585588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.585614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.585764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.585791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.585912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.585939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-25 05:54:14.586101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.586128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.586292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.586319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.586480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.586505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.586664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.586690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.586813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.586839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-25 05:54:14.586985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.587010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.587162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.587188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.587376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.587402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.587546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.587572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.587751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.587778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-25 05:54:14.587957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.587984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.588136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.588162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.588310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.588338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.588491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.588521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.588695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.588721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 
00:34:20.963 [2024-07-25 05:54:14.588840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.588867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.589040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.963 [2024-07-25 05:54:14.589066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.963 qpair failed and we were unable to recover it. 00:34:20.963 [2024-07-25 05:54:14.589212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.589251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.589428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.589454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.589613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.589638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 
00:34:20.964 [2024-07-25 05:54:14.589759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.589785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.589934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.589961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.590105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.590132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.590291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.590318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.590473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.590499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 
00:34:20.964 [2024-07-25 05:54:14.590651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.590677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.590837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.590865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.590984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.591010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.591158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.591185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.591372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.591399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 
00:34:20.964 [2024-07-25 05:54:14.591546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.591573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.591748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.591774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.591897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.591923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.592041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.592068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.592188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.592215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 
00:34:20.964 [2024-07-25 05:54:14.592385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.592411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.592588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.592614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.592735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.592761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.592937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.592963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.593114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.593141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 
00:34:20.964 [2024-07-25 05:54:14.593310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.593337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.593460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.593487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.593618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.593644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.593800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.593826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.593947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.593974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 
00:34:20.964 [2024-07-25 05:54:14.594119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.594146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.594287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.594314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.594492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.594518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.594641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.594667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.594832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.594858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 
00:34:20.964 [2024-07-25 05:54:14.594983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.595009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.595163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.595189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.595359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.595386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.595535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.595565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.595745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.595771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 
00:34:20.964 [2024-07-25 05:54:14.595895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.595922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.596103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.596130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.596325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.596362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.596487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.596513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 00:34:20.964 [2024-07-25 05:54:14.596676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.964 [2024-07-25 05:54:14.596702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:20.964 qpair failed and we were unable to recover it. 
00:34:21.238 [2024-07-25 05:54:14.596835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.238 [2024-07-25 05:54:14.596861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.238 qpair failed and we were unable to recover it. 00:34:21.238 [2024-07-25 05:54:14.597007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.238 [2024-07-25 05:54:14.597033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.238 qpair failed and we were unable to recover it. 00:34:21.238 [2024-07-25 05:54:14.597162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.238 [2024-07-25 05:54:14.597188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.238 qpair failed and we were unable to recover it. 00:34:21.238 [2024-07-25 05:54:14.597334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.238 [2024-07-25 05:54:14.597362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.238 qpair failed and we were unable to recover it. 00:34:21.238 [2024-07-25 05:54:14.597506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.238 [2024-07-25 05:54:14.597542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.238 qpair failed and we were unable to recover it. 
00:34:21.238 [2024-07-25 05:54:14.597669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.238 [2024-07-25 05:54:14.597696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.238 qpair failed and we were unable to recover it. 00:34:21.238 [2024-07-25 05:54:14.597842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.238 [2024-07-25 05:54:14.597868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.238 qpair failed and we were unable to recover it. 00:34:21.238 [2024-07-25 05:54:14.598026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.238 [2024-07-25 05:54:14.598052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.238 qpair failed and we were unable to recover it. 00:34:21.238 [2024-07-25 05:54:14.598195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.238 [2024-07-25 05:54:14.598222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.238 qpair failed and we were unable to recover it. 00:34:21.238 [2024-07-25 05:54:14.598355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.598381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 
00:34:21.239 [2024-07-25 05:54:14.598531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.598558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.598683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.598709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.598830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.598856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.598980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.599006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.599160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.599187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 
00:34:21.239 [2024-07-25 05:54:14.599318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.599344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.599463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.599490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.599644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.599671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.599822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.599848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.599975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.600002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 
00:34:21.239 [2024-07-25 05:54:14.600124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.600152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.600321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.600362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.600520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.600547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.600675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.600703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.600854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.600881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 
00:34:21.239 [2024-07-25 05:54:14.601028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.601055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.601187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.601215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.601344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.601372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.601524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.601551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.601678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.601705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 
00:34:21.239 [2024-07-25 05:54:14.601864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.601891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.602037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.602063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.602248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.602275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.602402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.602435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.602597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.602624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 
00:34:21.239 [2024-07-25 05:54:14.602770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.602796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.602976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.603002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.603130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.603156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.603287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.603316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.603471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.603499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 
00:34:21.239 [2024-07-25 05:54:14.603631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.603658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.603780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.603807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.603957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.603983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.604106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.604134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.604311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.604339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 
00:34:21.239 [2024-07-25 05:54:14.604462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.604489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.604620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.604647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.239 [2024-07-25 05:54:14.604823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.239 [2024-07-25 05:54:14.604849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.239 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.605003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.605029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.605189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.605215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 
00:34:21.240 [2024-07-25 05:54:14.605369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.605395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.605545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.605571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.605723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.605749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.605907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.605933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.606084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.606111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 
00:34:21.240 [2024-07-25 05:54:14.606262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.606289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.606412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.606439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.606569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.606596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.606745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.606772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.606929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.606956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 
00:34:21.240 [2024-07-25 05:54:14.607115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.607142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.607317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.607344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.607488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.607514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.607637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.607665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.607818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.607846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 
00:34:21.240 [2024-07-25 05:54:14.607991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.608031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.608196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.608224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.608392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.608419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.608545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.608572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.608698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.608724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 
00:34:21.240 [2024-07-25 05:54:14.608843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.608871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.608994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.609021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.609170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.609197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.609323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.609350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.609469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.609496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 
00:34:21.240 [2024-07-25 05:54:14.609650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.609676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.609788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.609814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.609976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.610004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.610129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.610155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.610325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.610352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 
00:34:21.240 [2024-07-25 05:54:14.610477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.610504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.610658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.610685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.610831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.610857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.610946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:21.240 [2024-07-25 05:54:14.611018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.611043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.611162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.611187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 
00:34:21.240 [2024-07-25 05:54:14.611345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.611372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.611503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.611529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.240 qpair failed and we were unable to recover it. 00:34:21.240 [2024-07-25 05:54:14.611681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.240 [2024-07-25 05:54:14.611708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.611827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.611853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.612002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.612027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 
00:34:21.241 [2024-07-25 05:54:14.612170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.612197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.612350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.612377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.612531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.612557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.612731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.612756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.612934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.612959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 
00:34:21.241 [2024-07-25 05:54:14.613103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.613129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.613286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.613313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.613463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.613488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.613641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.613669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.613799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.613825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 
00:34:21.241 [2024-07-25 05:54:14.613945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.613977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.614108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.614135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.614291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.614318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.614445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.614471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.614648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.614675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 
00:34:21.241 [2024-07-25 05:54:14.614864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.614891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.615102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.615128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.615282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.615309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.615519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.615545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.615755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.615781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 
00:34:21.241 [2024-07-25 05:54:14.615955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.615982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.616161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.616186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.616333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.616359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.616491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.616517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.616671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.616697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 
00:34:21.241 [2024-07-25 05:54:14.616847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.616874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.617005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.617030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.617150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.617176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.617341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.617368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.617497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.617523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 
00:34:21.241 [2024-07-25 05:54:14.617701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.617727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.617879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.617905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.618053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.618079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.618262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.618289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.618417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.618442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 
00:34:21.241 [2024-07-25 05:54:14.618619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.618645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.618833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.618859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.619004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.619029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.619182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.619208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.241 qpair failed and we were unable to recover it. 00:34:21.241 [2024-07-25 05:54:14.619364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.241 [2024-07-25 05:54:14.619390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-07-25 05:54:14.619528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.619554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.619714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.619740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.619895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.619921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.620100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.620126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.620284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.620313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-07-25 05:54:14.620472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.620498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.620631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.620658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.620816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.620842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.620995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.621021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.621203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.621230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-07-25 05:54:14.621412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.621438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.621586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.621616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.621761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.621788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.621932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.621958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.622073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.622099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-07-25 05:54:14.622274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.622302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.622452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.622478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.622611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.622637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.622793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.622820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.622982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.623008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-07-25 05:54:14.623153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.623179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.623306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.623333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.623487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.623513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.623658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.623684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.623838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.623864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-07-25 05:54:14.624045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.624071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.624251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.624278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.624408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.624434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.624558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.624584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.624721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.624747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-07-25 05:54:14.624879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.624905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.625052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.625078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.625202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.625229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.625357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.625385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.625540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.625568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-07-25 05:54:14.625717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.625744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.626044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.626075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.626236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.626267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.626396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.626426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 00:34:21.242 [2024-07-25 05:54:14.626580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.242 [2024-07-25 05:54:14.626609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.242 qpair failed and we were unable to recover it. 
00:34:21.242 [2024-07-25 05:54:14.626760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-07-25 05:54:14.626786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-07-25 05:54:14.626935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-07-25 05:54:14.626962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-07-25 05:54:14.627116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-07-25 05:54:14.627142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-07-25 05:54:14.627266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-07-25 05:54:14.627294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 00:34:21.243 [2024-07-25 05:54:14.627472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.243 [2024-07-25 05:54:14.627499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.243 qpair failed and we were unable to recover it. 
00:34:21.244 [2024-07-25 05:54:14.639856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.244 [2024-07-25 05:54:14.639883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.244 qpair failed and we were unable to recover it.
00:34:21.244 [2024-07-25 05:54:14.640035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.244 [2024-07-25 05:54:14.640061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.244 qpair failed and we were unable to recover it.
00:34:21.244 [2024-07-25 05:54:14.640188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.244 [2024-07-25 05:54:14.640217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.244 qpair failed and we were unable to recover it.
00:34:21.244 [2024-07-25 05:54:14.640403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.244 [2024-07-25 05:54:14.640446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.244 qpair failed and we were unable to recover it.
00:34:21.244 [2024-07-25 05:54:14.640608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.244 [2024-07-25 05:54:14.640636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.244 qpair failed and we were unable to recover it.
00:34:21.245 [2024-07-25 05:54:14.647168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.647197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.647357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.647385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.647514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.647542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.647700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.647727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.647855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.647883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 
00:34:21.245 [2024-07-25 05:54:14.648004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.648031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.648190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.648218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.648352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.648379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.648506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.648533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.648710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.648737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 
00:34:21.245 [2024-07-25 05:54:14.648861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.648888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.649067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.649093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.649210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.649237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.649374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.649402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.649562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.649589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 
00:34:21.245 [2024-07-25 05:54:14.649738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.245 [2024-07-25 05:54:14.649766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.245 qpair failed and we were unable to recover it. 00:34:21.245 [2024-07-25 05:54:14.649947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.649974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.650105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.650132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.650270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.650298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.650459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.650488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 
00:34:21.246 [2024-07-25 05:54:14.650606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.650633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.650786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.650818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.650961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.650989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.651134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.651161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.651315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.651343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 
00:34:21.246 [2024-07-25 05:54:14.651474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.651502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.651626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.651654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.651803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.651830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.652012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.652038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.652185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.652212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 
00:34:21.246 [2024-07-25 05:54:14.652410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.652438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.652597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.652634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.652790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.652817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.652943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.652970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.653098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.653124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 
00:34:21.246 [2024-07-25 05:54:14.653270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.653297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.653447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.653475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.653628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.653655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.653809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.653836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.653991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.654018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 
00:34:21.246 [2024-07-25 05:54:14.654164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.654190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.654353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.654380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.654504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.654530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.654650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.654677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.654824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.654851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 
00:34:21.246 [2024-07-25 05:54:14.655000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.655027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.655158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.655189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.655337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.655365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.655490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.655516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.655631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.655657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 
00:34:21.246 [2024-07-25 05:54:14.655778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.655805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.655955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.655982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.656134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.656160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.656289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.656315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.656473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.656499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 
00:34:21.246 [2024-07-25 05:54:14.656644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.246 [2024-07-25 05:54:14.656671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.246 qpair failed and we were unable to recover it. 00:34:21.246 [2024-07-25 05:54:14.656817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.656843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.656994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.657020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.657167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.657194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.657371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.657398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 
00:34:21.247 [2024-07-25 05:54:14.657550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.657576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.657709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.657735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.657879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.657905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.658052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.658078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.658206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.658232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 
00:34:21.247 [2024-07-25 05:54:14.658439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.658465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.658611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.658637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.658853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.658879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.659070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.659096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.659276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.659303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 
00:34:21.247 [2024-07-25 05:54:14.659455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.659481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.659628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.659654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.659781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.659809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.659981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.660008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 00:34:21.247 [2024-07-25 05:54:14.660161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.247 [2024-07-25 05:54:14.660187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.247 qpair failed and we were unable to recover it. 
00:34:21.247 [2024-07-25 05:54:14.660318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.660346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.660501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.660527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.660679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.660705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.660841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.660867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.661022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.661048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.661203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.661230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.661407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.661434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.661612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.661639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.661752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.661778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.661908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.661934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.662085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.662111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.662290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.662316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.662499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.662529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.662680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.662707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.662834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.662860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.662981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.663008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.663160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.663187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.663344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.663371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.663520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.663546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.663698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.663725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.663875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.663900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.664023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.664050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.664203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.664229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.664384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.664411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.664565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.664591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.664711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.664737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.247 qpair failed and we were unable to recover it.
00:34:21.247 [2024-07-25 05:54:14.664888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.247 [2024-07-25 05:54:14.664914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.665090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.665116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.665234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.665267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.665424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.665450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.665626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.665653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.665804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.665830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.665981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.666006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.666153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.666180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.666357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.666383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.666524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.666549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.666747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.666773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.666927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.666954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.667106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.667131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.667291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.667321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.667472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.667499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.667675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.667701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.667829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.667856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.667978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.668005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.668132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.668159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.668312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.668355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.668518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.668545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.668708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.668736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.668888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.668915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.669070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.669097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.669228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.669264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.669422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.669449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.669574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.669601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.669739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.669766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.669898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.669929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.670082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.670110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.670233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.670266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.670443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.670470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.670593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.670619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.670798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.670824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.671000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.671026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.671179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.671206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.671374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.671402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.671564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.671591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.671722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.671748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.671862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.671888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.672035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.672070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.672228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.672264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.672413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.672441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.672593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.672620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.672797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.672823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.672941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.248 [2024-07-25 05:54:14.672969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.248 qpair failed and we were unable to recover it.
00:34:21.248 [2024-07-25 05:54:14.673116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.673143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.673294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.673321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.673499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.673526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.673681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.673708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.673860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.673886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.674026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.674053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.674210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.674238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.674395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.674422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.674620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.674664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.674821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.674849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.675017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.675045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.675221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.675256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.675396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.675423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.675547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.675574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.675744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.675771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.675911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.675938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.676074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.676102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.676285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.676313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.676442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.676469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.676653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.676681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.676852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.676880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.677028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.677060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.677251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.677278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.677435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.677462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.677616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.677643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.677820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.677847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.677981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.678009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.678164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.678192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.678346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.678375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.678534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.678560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.678740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.678768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.678896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.678928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.679086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.679114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.679313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.679354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.679498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.679528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.679685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.679713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.679843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.679871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.680031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.680058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.680208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.680236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.680399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.680427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.680604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.680632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.680781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.680807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.680934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.249 [2024-07-25 05:54:14.680961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420
00:34:21.249 qpair failed and we were unable to recover it.
00:34:21.249 [2024-07-25 05:54:14.681120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.249 [2024-07-25 05:54:14.681148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.249 qpair failed and we were unable to recover it. 00:34:21.249 [2024-07-25 05:54:14.681273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.249 [2024-07-25 05:54:14.681300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.249 qpair failed and we were unable to recover it. 00:34:21.249 [2024-07-25 05:54:14.681425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.681451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.681578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.681606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.681756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.681783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 
00:34:21.250 [2024-07-25 05:54:14.681915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.681943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.682074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.682103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.682258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.682287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.682417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.682446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.682606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.682633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 
00:34:21.250 [2024-07-25 05:54:14.682816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.682841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.683020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.683046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.683168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.683194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.683345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.683372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.683531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.683557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 
00:34:21.250 [2024-07-25 05:54:14.683687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.683715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.683869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.683896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.684050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.684077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.684258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.684291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.684419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.684446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb38000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 
00:34:21.250 [2024-07-25 05:54:14.684586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.684627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.684792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.684822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.684951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.684979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.685138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.685165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.685297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.685325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 
00:34:21.250 [2024-07-25 05:54:14.685478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.685505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.685634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.685662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.685816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.685845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.685990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.686018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.686166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.686192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 
00:34:21.250 [2024-07-25 05:54:14.686329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.686358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.686504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.686530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.686669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.686696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.686863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.686890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.687086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.687112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 
00:34:21.250 [2024-07-25 05:54:14.687271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.687298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.687423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.687451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.687605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.687632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.687813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.687839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 00:34:21.250 [2024-07-25 05:54:14.687995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.250 [2024-07-25 05:54:14.688022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.250 qpair failed and we were unable to recover it. 
00:34:21.251 [2024-07-25 05:54:14.688145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.688171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.688326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.688354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.688478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.688505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.688655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.688682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.688830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.688857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 
00:34:21.251 [2024-07-25 05:54:14.689014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.689040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.689218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.689250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.689405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.689433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.689577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.689604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.689736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.689763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 
00:34:21.251 [2024-07-25 05:54:14.689889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.689916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.690074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.690100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.690258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.690286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.690441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.690469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.690619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.690646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 
00:34:21.251 [2024-07-25 05:54:14.690796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.690823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.690982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.691009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.691163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.691191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.691386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.691432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.691588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.691616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 
00:34:21.251 [2024-07-25 05:54:14.691801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.691829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.691961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.691990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.692127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.692155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.692311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.692338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.692503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.692531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 
00:34:21.251 [2024-07-25 05:54:14.692693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.692722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.692846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.692874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.693024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.693051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.693229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.693263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.693443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.693470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 
00:34:21.251 [2024-07-25 05:54:14.693592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.693619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.693772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.693799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.693957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.693983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.694135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.694162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.694309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.694337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 
00:34:21.251 [2024-07-25 05:54:14.694463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.694489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.694649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.694677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.694802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.694828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.694961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.694987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.695142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.695168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 
00:34:21.251 [2024-07-25 05:54:14.695287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.695315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.695496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.695523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.695655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.695682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.695830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.695857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 00:34:21.251 [2024-07-25 05:54:14.696010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.696036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.251 qpair failed and we were unable to recover it. 
00:34:21.251 [2024-07-25 05:54:14.696162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.251 [2024-07-25 05:54:14.696188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.696351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.696379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.696508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.696534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.696687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.696713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.696889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.696915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 
00:34:21.252 [2024-07-25 05:54:14.697057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.697083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.697231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.697262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.697390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.697415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.697543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.697570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.697715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.697740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 
00:34:21.252 [2024-07-25 05:54:14.697915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.697941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.698092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.698119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.698262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.698288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.698443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.698470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.698644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.698675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 
00:34:21.252 [2024-07-25 05:54:14.698834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.698860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.698989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.699013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.699142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.699168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.699321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.699347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.699502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.699528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 
00:34:21.252 [2024-07-25 05:54:14.699657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.699683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.699861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.699888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.700013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.700040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.700165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.700191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.700344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.700385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 
00:34:21.252 [2024-07-25 05:54:14.700511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.700538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.700664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.700691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.700815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.700842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.700964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.700991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.701107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.701133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 
00:34:21.252 [2024-07-25 05:54:14.701263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.701290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.701389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:21.252 [2024-07-25 05:54:14.701409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.701424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:21.252 [2024-07-25 05:54:14.701434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 [2024-07-25 05:54:14.701440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.701453] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:21.252 [2024-07-25 05:54:14.701463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:21.252 [2024-07-25 05:54:14.701551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.701576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 
00:34:21.252 [2024-07-25 05:54:14.701549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:21.252 [2024-07-25 05:54:14.701598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:21.252 [2024-07-25 05:54:14.701701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.701726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 [2024-07-25 05:54:14.701624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.701626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:21.252 [2024-07-25 05:54:14.701872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.701898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.702027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.702052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.702187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.702214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 
00:34:21.252 [2024-07-25 05:54:14.702347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.702374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.702522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.702564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.702704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.702732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.702867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.702894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.252 [2024-07-25 05:54:14.703016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.703043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 
00:34:21.252 [2024-07-25 05:54:14.703161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.252 [2024-07-25 05:54:14.703187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.252 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.703307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.703335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.703493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.703522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.703675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.703701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.703825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.703851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 
00:34:21.253 [2024-07-25 05:54:14.703981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.704007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.704129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.704156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.704317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.704344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.704464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.704490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.704636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.704662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 
00:34:21.253 [2024-07-25 05:54:14.704780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.704806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.704930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.704957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.705110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.705138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.705267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.705293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.705444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.705470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 
00:34:21.253 [2024-07-25 05:54:14.705586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.705612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.705726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.705752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.705899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.705925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.706089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.706130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.706257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.706286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 
00:34:21.253 [2024-07-25 05:54:14.706438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.706466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.706659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.706686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.706870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.706897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.707031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.707061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.707181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.707208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 
00:34:21.253 [2024-07-25 05:54:14.707345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.707371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.707519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.707545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.707671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.707699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.707828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.707855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.707968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.707994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 
00:34:21.253 [2024-07-25 05:54:14.708115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.708144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.708295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.708322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.708453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.708480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.708604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.708631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.708810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.708836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 
00:34:21.253 [2024-07-25 05:54:14.708967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.708994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.709124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.709151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.709294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.709321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.709474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.709501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.709647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.709673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 
00:34:21.253 [2024-07-25 05:54:14.709822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.709848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.709971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.709997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.710113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.710139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.710293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.710321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.710444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.710471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 
00:34:21.253 [2024-07-25 05:54:14.710626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.710654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.710777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.253 [2024-07-25 05:54:14.710805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.253 qpair failed and we were unable to recover it. 00:34:21.253 [2024-07-25 05:54:14.710921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.710948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.711098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.711138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.711267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.711295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 
00:34:21.254 [2024-07-25 05:54:14.711435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.711475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.711641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.711669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.711802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.711829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.712045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.712071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.712226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.712259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 
00:34:21.254 [2024-07-25 05:54:14.712415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.712441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.712561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.712588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.712708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.712735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.712891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.712917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.713037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.713063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 
00:34:21.254 [2024-07-25 05:54:14.713205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.713251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.713424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.713452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.713579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.713607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.713727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.713753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.713912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.713939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 
00:34:21.254 [2024-07-25 05:54:14.714056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.714083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.714219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.714253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.714411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.714438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.714586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.714612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.714738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.714764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 
00:34:21.254 [2024-07-25 05:54:14.714914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.714940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.715175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.715202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.715349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.715376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.715493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.715519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.715636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.715662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 
00:34:21.254 [2024-07-25 05:54:14.715820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.715846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.715990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.716017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.716129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.716160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.716292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.716320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.716482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.716508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 
00:34:21.254 [2024-07-25 05:54:14.716736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.716762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.716912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.716938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.717074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.717100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.717230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.717266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.717395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.717421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 
00:34:21.254 [2024-07-25 05:54:14.717578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.717604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.254 qpair failed and we were unable to recover it. 00:34:21.254 [2024-07-25 05:54:14.717748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.254 [2024-07-25 05:54:14.717774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.717904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.717930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.718052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.718078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.718246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.718274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 
00:34:21.255 [2024-07-25 05:54:14.718397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.718423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.718605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.718630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.718752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.718778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.718931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.718957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.719105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.719131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 
00:34:21.255 [2024-07-25 05:54:14.719253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.719279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.719437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.719463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.719585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.719611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.719728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.719754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.719872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.719897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 
00:34:21.255 [2024-07-25 05:54:14.720055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.720081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.720197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.720223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.720362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.720388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.720506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.720532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.720649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.720679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 
00:34:21.255 [2024-07-25 05:54:14.720795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.720820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.720935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.720961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.721096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.721122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.721253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.721279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.721404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.721429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 
00:34:21.255 [2024-07-25 05:54:14.721574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.721600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.721715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.721741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.721900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.721926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.722050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.722076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.722231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.722264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 
00:34:21.255 [2024-07-25 05:54:14.722386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.722412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.722538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.722565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.722696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.722722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.722865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.722891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.723034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.723060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 
00:34:21.255 [2024-07-25 05:54:14.723215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.723248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.723376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.723402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.723597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.723623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.723749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.723775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.723932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.723958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 
00:34:21.255 [2024-07-25 05:54:14.724113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.724139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.724279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.724306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.724466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.724492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.724647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.724672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.724786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.724812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 
00:34:21.255 [2024-07-25 05:54:14.724966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.724993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.725142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.725168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.255 [2024-07-25 05:54:14.725326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.255 [2024-07-25 05:54:14.725369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.255 qpair failed and we were unable to recover it. 00:34:21.256 [2024-07-25 05:54:14.725506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.256 [2024-07-25 05:54:14.725535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.256 qpair failed and we were unable to recover it. 00:34:21.256 [2024-07-25 05:54:14.725684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.256 [2024-07-25 05:54:14.725712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420 00:34:21.256 qpair failed and we were unable to recover it. 
00:34:21.256 [2024-07-25 05:54:14.725838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.256 [2024-07-25 05:54:14.725866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:21.256 qpair failed and we were unable to recover it.
00:34:21.256 [2024-07-25 05:54:14.726013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.256 [2024-07-25 05:54:14.726040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:21.256 qpair failed and we were unable to recover it.
00:34:21.256 [2024-07-25 05:54:14.726166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.256 [2024-07-25 05:54:14.726193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb30000b90 with addr=10.0.0.2, port=4420
00:34:21.256 qpair failed and we were unable to recover it.
00:34:21.256 [2024-07-25 05:54:14.726334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.256 [2024-07-25 05:54:14.726363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.256 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats approximately 110 more times between 05:54:14.726 and 05:54:14.745, alternating between tqpair=0x7fdb30000b90 and tqpair=0x5ef600, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:34:21.258 [2024-07-25 05:54:14.744897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.258 [2024-07-25 05:54:14.744926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420
00:34:21.258 qpair failed and we were unable to recover it.
00:34:21.258 [2024-07-25 05:54:14.745071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.745098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.745253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.745281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.745433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.745460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.745619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.745646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.745785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.745812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 
00:34:21.258 [2024-07-25 05:54:14.745929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.745956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.746080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.746107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.746255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.746282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.746433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.746459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.746592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.746619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 
00:34:21.258 [2024-07-25 05:54:14.746770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.746797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.746920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.746948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.747069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.747096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.747223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.747264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 00:34:21.258 [2024-07-25 05:54:14.747427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.258 [2024-07-25 05:54:14.747454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.258 qpair failed and we were unable to recover it. 
00:34:21.259 [2024-07-25 05:54:14.747574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.747600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.747732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.747759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.747876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.747907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.748036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.748063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.748209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.748235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 
00:34:21.259 [2024-07-25 05:54:14.748372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.748398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.748544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.748571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.748729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.748756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.748889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.748916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.749042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.749069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 
00:34:21.259 [2024-07-25 05:54:14.749224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.749258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.749416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.749443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.749590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.749617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.749749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.749776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.749901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.749927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 
00:34:21.259 [2024-07-25 05:54:14.750044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.750071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.750193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.750220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.750377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.750404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.750534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.750560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.750744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.750770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 
00:34:21.259 [2024-07-25 05:54:14.750921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.750948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.751074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.751100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.751240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.751308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.751454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.751480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.751628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.751653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 
00:34:21.259 [2024-07-25 05:54:14.751810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.751835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.751966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.751992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.752148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.752174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.752290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.752317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.752467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.752499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 
00:34:21.259 [2024-07-25 05:54:14.752656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.752682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.752805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.752831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.752949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.752976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.753118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.753144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.753263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.753290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 
00:34:21.259 [2024-07-25 05:54:14.753440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.753466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.753621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.753647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.753762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.753788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.753915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.753942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.754067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.754093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 
00:34:21.259 [2024-07-25 05:54:14.754207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.754232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.754366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.754393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.754516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.754543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.754698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.754725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.754878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.754904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 
00:34:21.259 [2024-07-25 05:54:14.755051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.259 [2024-07-25 05:54:14.755077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.259 qpair failed and we were unable to recover it. 00:34:21.259 [2024-07-25 05:54:14.755255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.755283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.755407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.755434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.755584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.755609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.755741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.755768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 
00:34:21.260 [2024-07-25 05:54:14.755920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.755947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.756068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.756094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.756256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.756287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.756423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.756451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.756611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.756637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 
00:34:21.260 [2024-07-25 05:54:14.756754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.756781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.756934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.756960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.757087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.757115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.757290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.757317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 00:34:21.260 [2024-07-25 05:54:14.757436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.757461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 
00:34:21.260 [2024-07-25 05:54:14.757582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.260 [2024-07-25 05:54:14.757608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.260 qpair failed and we were unable to recover it. 
00:34:21.260-00:34:21.262 [... the same three-line error sequence (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeats continuously from 05:54:14.757730 through 05:54:14.777341, mostly against tqpair=0x5ef600 with intermittent runs against tqpair=0x7fdb40000b90 ...] 
00:34:21.262 [2024-07-25 05:54:14.777468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.262 [2024-07-25 05:54:14.777495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.262 qpair failed and we were unable to recover it. 00:34:21.262 [2024-07-25 05:54:14.777625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.777653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.777813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.777841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.778001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.778030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.778160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.778188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 
00:34:21.263 [2024-07-25 05:54:14.778332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.778359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.778511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.778537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.778698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.778727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.778882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.778909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.779027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.779054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 
00:34:21.263 [2024-07-25 05:54:14.779185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.779212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.779374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.779402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.779562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.779589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.779720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.779746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.779904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.779931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 
00:34:21.263 [2024-07-25 05:54:14.780066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.780095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.780258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.780286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.780433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.780466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.780616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.780644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.780766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.780794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 
00:34:21.263 [2024-07-25 05:54:14.780949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.780975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.781127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.781154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.781275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.781302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.781434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.781463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.781596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.781624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 
00:34:21.263 [2024-07-25 05:54:14.781745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.781772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.781921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.781949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.782100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.782127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.782260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.782288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.782414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.782440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 
00:34:21.263 [2024-07-25 05:54:14.782559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.782585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.782769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.782797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.782931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.782957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.783106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.783134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.783259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.783286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 
00:34:21.263 [2024-07-25 05:54:14.783407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.783434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.783556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.783584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.783734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.783762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.783889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.783916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.784091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.784120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 
00:34:21.263 [2024-07-25 05:54:14.784274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.784302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.784440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.784467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.784581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.784608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.784734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.784761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.784896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.784924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 
00:34:21.263 [2024-07-25 05:54:14.785044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.785072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.785191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.785217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.263 qpair failed and we were unable to recover it. 00:34:21.263 [2024-07-25 05:54:14.785374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.263 [2024-07-25 05:54:14.785402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.785524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.785551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.785673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.785701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 
00:34:21.264 [2024-07-25 05:54:14.785828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.785856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.786006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.786035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.786180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.786207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.786358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.786384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.786501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.786528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 
00:34:21.264 [2024-07-25 05:54:14.786674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.786702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.786826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.786854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.786972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.786999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.787152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.787179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.787306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.787334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 
00:34:21.264 [2024-07-25 05:54:14.787494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.787521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.787710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.787737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.787885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.787911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.788039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.788068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.788189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.788218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 
00:34:21.264 [2024-07-25 05:54:14.788376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.788405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.788564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.788591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.788744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.788772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.788892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.788919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.789071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.789099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 
00:34:21.264 [2024-07-25 05:54:14.789226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.789258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.789407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.789435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.789613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.789640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.789771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.789798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.789922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.789950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 
00:34:21.264 [2024-07-25 05:54:14.790112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.790139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.790262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.790290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.790422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.790448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.790574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.790601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.790721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.790747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 
00:34:21.264 [2024-07-25 05:54:14.790864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.790891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.791022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.791050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.791205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.791232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.791365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.791392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.791541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.791568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 
00:34:21.264 [2024-07-25 05:54:14.791700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.791728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.791848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.791874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.792055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.792085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.792217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.792249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.792392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.792419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 
00:34:21.264 [2024-07-25 05:54:14.792561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.792589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb40000b90 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.792744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.792771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.792895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.792922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.264 [2024-07-25 05:54:14.793040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.264 [2024-07-25 05:54:14.793067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.264 qpair failed and we were unable to recover it. 00:34:21.265 [2024-07-25 05:54:14.793188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.265 [2024-07-25 05:54:14.793215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.265 qpair failed and we were unable to recover it. 
00:34:21.265 [2024-07-25 05:54:14.793366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.265 [2024-07-25 05:54:14.793394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.265 qpair failed and we were unable to recover it. 00:34:21.265 [2024-07-25 05:54:14.793511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.265 [2024-07-25 05:54:14.793538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.265 qpair failed and we were unable to recover it. 00:34:21.265 [2024-07-25 05:54:14.793648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.265 [2024-07-25 05:54:14.793674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef600 with addr=10.0.0.2, port=4420 00:34:21.265 qpair failed and we were unable to recover it. 00:34:21.265 A controller has encountered a failure and is being reset. 
00:34:21.265 [2024-07-25 05:54:14.793878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.265 [2024-07-25 05:54:14.793914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5fd620 with addr=10.0.0.2, port=4420 00:34:21.265 [2024-07-25 05:54:14.793933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fd620 is same with the state(5) to be set 00:34:21.265 [2024-07-25 05:54:14.793963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fd620 (9): Bad file descriptor 00:34:21.265 [2024-07-25 05:54:14.793985] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.265 [2024-07-25 05:54:14.794001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.265 [2024-07-25 05:54:14.794020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.265 Unable to reset the controller. 
00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.265 Malloc0 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.265 [2024-07-25 
05:54:14.892752] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.265 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.265 [2024-07-25 
05:54:14.921034] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.523 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.523 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:21.523 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.523 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.523 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.523 05:54:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1782084 00:34:22.456 Controller properly reset. 00:34:27.716 Initializing NVMe Controllers 00:34:27.716 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:27.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:27.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:27.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:27.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:27.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:27.716 Initialization complete. Launching workers. 
00:34:27.716 Starting thread on core 1 00:34:27.716 Starting thread on core 2 00:34:27.716 Starting thread on core 3 00:34:27.716 Starting thread on core 0 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:27.716 00:34:27.716 real 0m10.775s 00:34:27.716 user 0m32.422s 00:34:27.716 sys 0m8.292s 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:27.716 ************************************ 00:34:27.716 END TEST nvmf_target_disconnect_tc2 00:34:27.716 ************************************ 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:27.716 rmmod nvme_tcp 00:34:27.716 rmmod nvme_fabrics 00:34:27.716 rmmod nvme_keyring 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1782611 ']' 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1782611 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1782611 ']' 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1782611 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1782611 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:34:27.716 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:34:27.717 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1782611' 00:34:27.717 killing process with pid 1782611 00:34:27.717 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1782611 00:34:27.717 05:54:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1782611 00:34:27.717 05:54:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:27.717 05:54:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:27.717 05:54:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:27.717 05:54:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:27.717 05:54:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:27.717 05:54:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.717 05:54:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.717 05:54:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.616 05:54:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:29.616 00:34:29.616 real 0m15.319s 00:34:29.616 user 0m58.112s 00:34:29.616 sys 0m10.573s 00:34:29.616 05:54:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:29.616 05:54:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:29.616 ************************************ 00:34:29.616 END TEST nvmf_target_disconnect 00:34:29.616 ************************************ 00:34:29.616 05:54:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:29.616 00:34:29.616 real 6m31.422s 00:34:29.616 user 16m55.210s 00:34:29.616 sys 1m27.947s 00:34:29.616 05:54:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:29.616 05:54:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.616 ************************************ 00:34:29.616 END TEST nvmf_host 00:34:29.616 ************************************ 00:34:29.616 00:34:29.616 real 27m6.269s 00:34:29.616 user 73m57.293s 00:34:29.616 sys 6m27.119s 00:34:29.616 05:54:23 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:29.616 05:54:23 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:34:29.616 ************************************ 00:34:29.616 END TEST nvmf_tcp 00:34:29.616 ************************************ 00:34:29.616 05:54:23 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:34:29.616 05:54:23 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:29.616 05:54:23 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:29.616 05:54:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:29.616 05:54:23 -- common/autotest_common.sh@10 -- # set +x 00:34:29.873 ************************************ 00:34:29.873 START TEST spdkcli_nvmf_tcp 00:34:29.873 ************************************ 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:29.873 * Looking for test storage... 00:34:29.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1783771 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1783771 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1783771 ']' 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:29.873 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.873 [2024-07-25 05:54:23.438609] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:34:29.873 [2024-07-25 05:54:23.438707] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783771 ] 00:34:29.873 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.873 [2024-07-25 05:54:23.497364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:30.130 [2024-07-25 05:54:23.583651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.130 [2024-07-25 05:54:23.583656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.130 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:30.130 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:34:30.131 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:30.131 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:30.131 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:30.131 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:30.131 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:30.131 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:30.131 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:30.131 05:54:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:34:30.131 05:54:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:30.131 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:30.131 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:30.131 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:30.131 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:30.131 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:30.131 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:30.131 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:30.131 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:30.131 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:30.131 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:30.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:30.131 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:30.131 ' 00:34:32.656 [2024-07-25 05:54:26.225106] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.028 [2024-07-25 05:54:27.465518] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:36.556 [2024-07-25 05:54:29.736657] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:38.454 [2024-07-25 05:54:31.678676] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:34:39.826 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:39.826 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:39.826 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:39.826 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:39.826 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:39.826 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:39.826 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:39.826 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.826 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.826 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:34:39.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:34:39.826 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:34:39.826 05:54:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:34:39.826 05:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:39.826 05:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:39.826 05:54:33
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:34:39.826 05:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:39.826 05:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:39.826 05:54:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:34:39.826 05:54:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:34:40.082 05:54:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:34:40.082 05:54:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:34:40.082 05:54:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:34:40.082 05:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:40.082 05:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:40.082 05:54:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:34:40.082 05:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:40.082 05:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:40.082 05:54:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:34:40.082 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:34:40.082 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:34:40.082 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\''
'\''nqn.2014-08.org.spdk:cnode1'\''
00:34:40.082 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:34:40.082 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:34:40.082 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:34:40.082 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:34:40.082 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:34:40.082 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:34:40.082 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:34:40.082 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:34:40.082 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:34:40.082 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:34:40.082 '
00:34:45.339 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:34:45.339 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:34:45.339 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:34:45.339 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:34:45.339 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:34:45.339 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:34:45.339 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:34:45.339 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:34:45.339 Executing command: ['/bdevs/malloc delete Malloc6',
'Malloc6', False]
00:34:45.339 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:34:45.339 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:34:45.339 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:34:45.339 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:34:45.339 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:34:45.339 05:54:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:34:45.339 05:54:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:45.339 05:54:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1783771
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1783771 ']'
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1783771
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1783771
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1783771'
killing process with pid 1783771
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1783771
00:34:45.339 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1783771
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:34:45.597
05:54:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1783771 ']'
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1783771
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1783771 ']'
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1783771
00:34:45.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1783771) - No such process
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1783771 is not found'
Process with pid 1783771 is not found
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:34:45.597
00:34:45.597 real 0m15.918s
00:34:45.597 user 0m33.655s
00:34:45.597 sys 0m0.811s
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:45.597 05:54:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:45.597 ************************************
00:34:45.597 END TEST spdkcli_nvmf_tcp
00:34:45.597 ************************************
00:34:45.597 05:54:39 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:34:45.597 05:54:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:34:45.597 05:54:39 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:45.597 05:54:39 -- common/autotest_common.sh@10 -- # set +x
00:34:45.597 ************************************
00:34:45.597 START TEST
nvmf_identify_passthru
00:34:45.597 ************************************
00:34:45.597 05:54:39 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:34:45.855 * Looking for test storage...
00:34:45.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:45.855 05:54:39 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:34:45.855 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:45.856 05:54:39 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:45.856 05:54:39 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:45.856 05:54:39 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:45.856 05:54:39 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:45.856 05:54:39 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:45.856 05:54:39 nvmf_identify_passthru -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:45.856 05:54:39 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:34:45.856 05:54:39 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0
00:34:45.856 05:54:39 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:45.856 05:54:39
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:45.856 05:54:39 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:45.856 05:54:39 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:45.856 05:54:39 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:45.856 05:54:39 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:45.856 05:54:39 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:45.856 05:54:39 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:34:45.856 05:54:39 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:45.856 05:54:39 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:45.856 05:54:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:34:45.856 05:54:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:34:45.856 05:54:39 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable
00:34:45.856 05:54:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@291 -- #
pci_devs=()
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=()
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=()
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=()
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=()
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=()
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=()
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:47.754 05:54:41 nvmf_identify_passthru --
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:34:47.754 05:54:41
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]]
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:34:47.754 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:47.755 05:54:41 nvmf_identify_passthru --
nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]]
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:47.755 05:54:41
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:34:47.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:47.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms
00:34:47.755
00:34:47.755 --- 10.0.0.2 ping statistics ---
00:34:47.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:47.755 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:47.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:47.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms
00:34:47.755
00:34:47.755 --- 10.0.0.1 ping statistics ---
00:34:47.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:47.755 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:34:47.755 05:54:41 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:34:48.013 05:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:34:48.013 05:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=()
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs))
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs
00:34:48.013 05:54:41 nvmf_identify_passthru --
common/autotest_common.sh@1513 -- # bdfs=()
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0
00:34:48.013 05:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0
00:34:48.013 05:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0
00:34:48.013 05:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']'
00:34:48.013 05:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0
00:34:48.013 05:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:34:48.013 05:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:34:48.013 EAL: No free 2048 kB hugepages reported on node 1
00:34:52.193 05:54:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN
00:34:52.193 05:54:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0
00:34:52.193 05:54:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:34:52.193 05:54:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:52.193 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.457 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:56.457 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:56.457 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:56.457 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.457 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:56.457 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:56.457 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.457 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1788305 00:34:56.457 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:56.457 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:56.457 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1788305 00:34:56.457 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1788305 ']' 00:34:56.457 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.457 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:56.457 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:56.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.457 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:56.457 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.457 [2024-07-25 05:54:50.113378] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:34:56.457 [2024-07-25 05:54:50.113476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.715 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.715 [2024-07-25 05:54:50.179813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:56.715 [2024-07-25 05:54:50.265952] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.715 [2024-07-25 05:54:50.266017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.715 [2024-07-25 05:54:50.266039] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.715 [2024-07-25 05:54:50.266050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.715 [2024-07-25 05:54:50.266059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:56.715 [2024-07-25 05:54:50.266138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.715 [2024-07-25 05:54:50.266163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.715 [2024-07-25 05:54:50.266221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.715 [2024-07-25 05:54:50.266224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.715 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:56.715 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:34:56.715 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:56.715 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.715 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.715 INFO: Log level set to 20 00:34:56.715 INFO: Requests: 00:34:56.715 { 00:34:56.715 "jsonrpc": "2.0", 00:34:56.715 "method": "nvmf_set_config", 00:34:56.715 "id": 1, 00:34:56.715 "params": { 00:34:56.715 "admin_cmd_passthru": { 00:34:56.715 "identify_ctrlr": true 00:34:56.715 } 00:34:56.715 } 00:34:56.715 } 00:34:56.715 00:34:56.715 INFO: response: 00:34:56.715 { 00:34:56.715 "jsonrpc": "2.0", 00:34:56.715 "id": 1, 00:34:56.715 "result": true 00:34:56.715 } 00:34:56.715 00:34:56.715 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.716 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:56.716 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.716 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.716 INFO: Setting log level to 20 00:34:56.716 INFO: Setting log level to 20 00:34:56.716 INFO: Log level set to 20 00:34:56.716 INFO: Log level set to 20 00:34:56.716 
INFO: Requests: 00:34:56.716 { 00:34:56.716 "jsonrpc": "2.0", 00:34:56.716 "method": "framework_start_init", 00:34:56.716 "id": 1 00:34:56.716 } 00:34:56.716 00:34:56.716 INFO: Requests: 00:34:56.716 { 00:34:56.716 "jsonrpc": "2.0", 00:34:56.716 "method": "framework_start_init", 00:34:56.716 "id": 1 00:34:56.716 } 00:34:56.716 00:34:56.973 [2024-07-25 05:54:50.420679] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:56.973 INFO: response: 00:34:56.973 { 00:34:56.973 "jsonrpc": "2.0", 00:34:56.973 "id": 1, 00:34:56.973 "result": true 00:34:56.973 } 00:34:56.973 00:34:56.973 INFO: response: 00:34:56.973 { 00:34:56.973 "jsonrpc": "2.0", 00:34:56.973 "id": 1, 00:34:56.973 "result": true 00:34:56.973 } 00:34:56.973 00:34:56.973 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.973 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:56.973 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.973 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.973 INFO: Setting log level to 40 00:34:56.973 INFO: Setting log level to 40 00:34:56.973 INFO: Setting log level to 40 00:34:56.973 [2024-07-25 05:54:50.430553] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.973 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.973 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:56.973 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:56.973 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.973 05:54:50 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:56.973 05:54:50 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.973 05:54:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.248 Nvme0n1 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.248 [2024-07-25 05:54:53.313561] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.248 05:54:53 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.248 [ 00:35:00.248 { 00:35:00.248 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:00.248 "subtype": "Discovery", 00:35:00.248 "listen_addresses": [], 00:35:00.248 "allow_any_host": true, 00:35:00.248 "hosts": [] 00:35:00.248 }, 00:35:00.248 { 00:35:00.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:00.248 "subtype": "NVMe", 00:35:00.248 "listen_addresses": [ 00:35:00.248 { 00:35:00.248 "trtype": "TCP", 00:35:00.248 "adrfam": "IPv4", 00:35:00.248 "traddr": "10.0.0.2", 00:35:00.248 "trsvcid": "4420" 00:35:00.248 } 00:35:00.248 ], 00:35:00.248 "allow_any_host": true, 00:35:00.248 "hosts": [], 00:35:00.248 "serial_number": "SPDK00000000000001", 00:35:00.248 "model_number": "SPDK bdev Controller", 00:35:00.248 "max_namespaces": 1, 00:35:00.248 "min_cntlid": 1, 00:35:00.248 "max_cntlid": 65519, 00:35:00.248 "namespaces": [ 00:35:00.248 { 00:35:00.248 "nsid": 1, 00:35:00.248 "bdev_name": "Nvme0n1", 00:35:00.248 "name": "Nvme0n1", 00:35:00.248 "nguid": "112029ABC71E43F5B8B8FCE3025C9E52", 00:35:00.248 "uuid": "112029ab-c71e-43f5-b8b8-fce3025c9e52" 00:35:00.248 } 00:35:00.248 ] 00:35:00.248 } 00:35:00.248 ] 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:00.248 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:00.248 05:54:53 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:00.248 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.248 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.248 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:00.249 05:54:53 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:00.249 rmmod 
nvme_tcp 00:35:00.249 rmmod nvme_fabrics 00:35:00.249 rmmod nvme_keyring 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1788305 ']' 00:35:00.249 05:54:53 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1788305 00:35:00.249 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1788305 ']' 00:35:00.249 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1788305 00:35:00.249 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:35:00.249 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:00.249 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1788305 00:35:00.249 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:00.249 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:00.249 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1788305' 00:35:00.249 killing process with pid 1788305 00:35:00.249 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1788305 00:35:00.249 05:54:53 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1788305 00:35:01.621 05:54:55 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:01.621 05:54:55 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:01.621 05:54:55 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:01.621 05:54:55 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:35:01.621 05:54:55 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:01.621 05:54:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.621 05:54:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:01.621 05:54:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.149 05:54:57 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:04.149 00:35:04.149 real 0m18.045s 00:35:04.149 user 0m26.572s 00:35:04.149 sys 0m2.332s 00:35:04.149 05:54:57 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:04.149 05:54:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.149 ************************************ 00:35:04.149 END TEST nvmf_identify_passthru 00:35:04.149 ************************************ 00:35:04.149 05:54:57 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:04.149 05:54:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:04.149 05:54:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:04.149 05:54:57 -- common/autotest_common.sh@10 -- # set +x 00:35:04.149 ************************************ 00:35:04.149 START TEST nvmf_dif 00:35:04.149 ************************************ 00:35:04.149 05:54:57 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:04.149 * Looking for test storage... 
00:35:04.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:04.149 05:54:57 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.149 05:54:57 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.149 05:54:57 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.149 05:54:57 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.149 05:54:57 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.149 05:54:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.149 05:54:57 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.150 05:54:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.150 05:54:57 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:04.150 05:54:57 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:04.150 05:54:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:04.150 05:54:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:04.150 05:54:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:04.150 05:54:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:04.150 05:54:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.150 05:54:57 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.150 05:54:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:04.150 05:54:57 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:04.150 05:54:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.048 05:54:59 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.048 05:54:59 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:06.048 05:54:59 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:06.048 05:54:59 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:06.049 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:35:06.049 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:06.049 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:06.049 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.049 05:54:59 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:06.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:06.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:35:06.049 00:35:06.049 --- 10.0.0.2 ping statistics --- 00:35:06.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.049 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:06.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:35:06.049 00:35:06.049 --- 10.0.0.1 ping statistics --- 00:35:06.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.049 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:06.049 05:54:59 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:06.984 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:06.984 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:06.984 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:06.984 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:06.984 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:06.984 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:06.984 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:06.984 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:06.984 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:06.984 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:06.984 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:06.984 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:06.984 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:06.984 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:06.984 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:06.984 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:06.985 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:07.267 05:55:00 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.267 05:55:00 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:07.267 05:55:00 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:07.267 05:55:00 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.267 05:55:00 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:07.267 05:55:00 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:07.267 05:55:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:07.267 05:55:00 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:07.267 05:55:00 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:07.267 05:55:00 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:07.267 05:55:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:07.267 05:55:00 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1791446 00:35:07.267 05:55:00 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:07.267 05:55:00 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1791446 00:35:07.267 05:55:00 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1791446 ']' 00:35:07.267 05:55:00 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.267 05:55:00 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:07.267 05:55:00 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.267 05:55:00 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:07.267 05:55:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:07.267 [2024-07-25 05:55:00.846591] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:35:07.267 [2024-07-25 05:55:00.846685] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:07.267 EAL: No free 2048 kB hugepages reported on node 1 00:35:07.267 [2024-07-25 05:55:00.914762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.524 [2024-07-25 05:55:01.003670] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:07.524 [2024-07-25 05:55:01.003733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:07.524 [2024-07-25 05:55:01.003759] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:07.524 [2024-07-25 05:55:01.003773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:07.524 [2024-07-25 05:55:01.003786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:07.524 [2024-07-25 05:55:01.003816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.524 05:55:01 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:07.524 05:55:01 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:35:07.524 05:55:01 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:07.524 05:55:01 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:07.524 05:55:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:07.524 05:55:01 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:07.524 05:55:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:07.524 05:55:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:07.524 05:55:01 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.524 05:55:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:07.524 [2024-07-25 05:55:01.145842] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.524 05:55:01 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.524 05:55:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:07.524 05:55:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:07.524 05:55:01 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:07.524 05:55:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:07.524 ************************************ 00:35:07.524 START TEST fio_dif_1_default 00:35:07.525 ************************************ 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:07.525 bdev_null0 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:07.525 [2024-07-25 05:55:01.202131] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:07.525 { 00:35:07.525 "params": { 00:35:07.525 "name": "Nvme$subsystem", 00:35:07.525 "trtype": "$TEST_TRANSPORT", 00:35:07.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:07.525 "adrfam": "ipv4", 00:35:07.525 "trsvcid": "$NVMF_PORT", 00:35:07.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:07.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:07.525 "hdgst": ${hdgst:-false}, 00:35:07.525 "ddgst": ${ddgst:-false} 00:35:07.525 }, 00:35:07.525 "method": "bdev_nvme_attach_controller" 00:35:07.525 } 00:35:07.525 EOF 00:35:07.525 )") 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:07.525 05:55:01 
nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:07.525 05:55:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:07.525 "params": { 00:35:07.525 "name": "Nvme0", 00:35:07.525 "trtype": "tcp", 00:35:07.525 "traddr": "10.0.0.2", 00:35:07.525 "adrfam": "ipv4", 00:35:07.525 "trsvcid": "4420", 00:35:07.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:07.525 "hdgst": false, 00:35:07.525 "ddgst": false 00:35:07.525 }, 00:35:07.525 "method": "bdev_nvme_attach_controller" 00:35:07.525 }' 00:35:07.783 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:07.783 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:07.783 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:07.783 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.783 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:07.783 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:07.783 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:07.783 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:07.783 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:07.783 05:55:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.783 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:07.783 fio-3.35 
00:35:07.783 Starting 1 thread 00:35:08.041 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.239 00:35:20.239 filename0: (groupid=0, jobs=1): err= 0: pid=1791675: Thu Jul 25 05:55:11 2024 00:35:20.239 read: IOPS=141, BW=568KiB/s (581kB/s)(5680KiB/10005msec) 00:35:20.239 slat (nsec): min=4263, max=31863, avg=9201.94, stdev=2415.40 00:35:20.239 clat (usec): min=724, max=48672, avg=28153.65, stdev=18803.59 00:35:20.239 lat (usec): min=731, max=48686, avg=28162.85, stdev=18803.57 00:35:20.239 clat percentiles (usec): 00:35:20.239 | 1.00th=[ 758], 5.00th=[ 775], 10.00th=[ 799], 20.00th=[ 848], 00:35:20.239 | 30.00th=[ 914], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:20.239 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:20.239 | 99.00th=[41157], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:35:20.239 | 99.99th=[48497] 00:35:20.239 bw ( KiB/s): min= 384, max= 768, per=99.70%, avg=566.40, stdev=181.05, samples=20 00:35:20.239 iops : min= 96, max= 192, avg=141.60, stdev=45.26, samples=20 00:35:20.239 lat (usec) : 750=0.56%, 1000=31.55% 00:35:20.239 lat (msec) : 50=67.89% 00:35:20.239 cpu : usr=90.12%, sys=9.62%, ctx=14, majf=0, minf=236 00:35:20.239 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.239 issued rwts: total=1420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.239 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:20.239 00:35:20.239 Run status group 0 (all jobs): 00:35:20.239 READ: bw=568KiB/s (581kB/s), 568KiB/s-568KiB/s (581kB/s-581kB/s), io=5680KiB (5816kB), run=10005-10005msec 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:20.239 05:55:12 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.239 00:35:20.239 real 0m10.969s 00:35:20.239 user 0m10.053s 00:35:20.239 sys 0m1.209s 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:20.239 ************************************ 00:35:20.239 END TEST fio_dif_1_default 00:35:20.239 ************************************ 00:35:20.239 05:55:12 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:20.239 05:55:12 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:20.239 05:55:12 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:20.239 05:55:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:20.239 ************************************ 00:35:20.239 START TEST fio_dif_1_multi_subsystems 00:35:20.239 
************************************ 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.239 bdev_null0 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.239 [2024-07-25 05:55:12.222501] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:20.239 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.240 bdev_null1 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.240 
05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:20.240 { 00:35:20.240 "params": { 00:35:20.240 "name": "Nvme$subsystem", 00:35:20.240 
"trtype": "$TEST_TRANSPORT", 00:35:20.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.240 "adrfam": "ipv4", 00:35:20.240 "trsvcid": "$NVMF_PORT", 00:35:20.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.240 "hdgst": ${hdgst:-false}, 00:35:20.240 "ddgst": ${ddgst:-false} 00:35:20.240 }, 00:35:20.240 "method": "bdev_nvme_attach_controller" 00:35:20.240 } 00:35:20.240 EOF 00:35:20.240 )") 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 
00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:20.240 { 00:35:20.240 "params": { 00:35:20.240 "name": "Nvme$subsystem", 00:35:20.240 "trtype": "$TEST_TRANSPORT", 00:35:20.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.240 "adrfam": "ipv4", 00:35:20.240 "trsvcid": "$NVMF_PORT", 00:35:20.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.240 "hdgst": ${hdgst:-false}, 00:35:20.240 "ddgst": ${ddgst:-false} 00:35:20.240 }, 00:35:20.240 "method": "bdev_nvme_attach_controller" 00:35:20.240 } 00:35:20.240 EOF 00:35:20.240 )") 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:20.240 "params": { 00:35:20.240 "name": "Nvme0", 00:35:20.240 "trtype": "tcp", 00:35:20.240 "traddr": "10.0.0.2", 00:35:20.240 "adrfam": "ipv4", 00:35:20.240 "trsvcid": "4420", 00:35:20.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.240 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:20.240 "hdgst": false, 00:35:20.240 "ddgst": false 00:35:20.240 }, 00:35:20.240 "method": "bdev_nvme_attach_controller" 00:35:20.240 },{ 00:35:20.240 "params": { 00:35:20.240 "name": "Nvme1", 00:35:20.240 "trtype": "tcp", 00:35:20.240 "traddr": "10.0.0.2", 00:35:20.240 "adrfam": "ipv4", 00:35:20.240 "trsvcid": "4420", 00:35:20.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:20.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:20.240 "hdgst": false, 00:35:20.240 "ddgst": false 00:35:20.240 }, 00:35:20.240 "method": "bdev_nvme_attach_controller" 00:35:20.240 }' 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:20.240 05:55:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.240 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:20.240 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:20.240 fio-3.35 00:35:20.240 Starting 2 threads 00:35:20.240 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.206 00:35:30.206 filename0: (groupid=0, jobs=1): err= 0: pid=1793073: Thu Jul 25 05:55:23 2024 00:35:30.206 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:35:30.206 slat (nsec): min=7166, max=43279, avg=9417.75, stdev=3426.58 00:35:30.206 clat (usec): min=717, max=48491, avg=21073.52, stdev=20191.34 00:35:30.206 lat (usec): min=726, max=48509, avg=21082.94, stdev=20191.00 00:35:30.206 clat percentiles (usec): 00:35:30.206 | 1.00th=[ 742], 5.00th=[ 775], 10.00th=[ 783], 20.00th=[ 799], 00:35:30.206 | 30.00th=[ 807], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:35:30.206 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:30.206 | 99.00th=[41157], 99.50th=[41157], 99.90th=[48497], 99.95th=[48497], 00:35:30.206 | 99.99th=[48497] 00:35:30.206 bw ( KiB/s): min= 672, max= 768, per=57.38%, avg=759.58, stdev=25.78, samples=19 00:35:30.206 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:35:30.206 lat (usec) : 750=1.27%, 1000=48.31% 00:35:30.206 lat (msec) : 2=0.21%, 50=50.21% 00:35:30.206 cpu : usr=94.14%, sys=5.57%, ctx=8, majf=0, minf=151 00:35:30.206 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:30.206 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.206 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:30.206 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:30.206 filename1: (groupid=0, jobs=1): err= 0: pid=1793074: Thu Jul 25 05:55:23 2024 00:35:30.206 read: IOPS=141, BW=565KiB/s (578kB/s)(5648KiB/10004msec) 00:35:30.206 slat (nsec): min=5758, max=36504, avg=9414.11, stdev=3463.47 00:35:30.206 clat (usec): min=688, max=48527, avg=28309.57, stdev=19006.73 00:35:30.206 lat (usec): min=695, max=48540, avg=28318.98, stdev=19006.92 00:35:30.206 clat percentiles (usec): 00:35:30.206 | 1.00th=[ 709], 5.00th=[ 725], 10.00th=[ 750], 20.00th=[ 799], 00:35:30.206 | 30.00th=[ 1029], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:30.206 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:30.206 | 99.00th=[42206], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:35:30.206 | 99.99th=[48497] 00:35:30.206 bw ( KiB/s): min= 384, max= 768, per=42.57%, avg=563.20, stdev=180.54, samples=20 00:35:30.206 iops : min= 96, max= 192, avg=140.80, stdev=45.14, samples=20 00:35:30.206 lat (usec) : 750=10.48%, 1000=17.92% 00:35:30.206 lat (msec) : 2=3.90%, 50=67.71% 00:35:30.206 cpu : usr=94.61%, sys=5.09%, ctx=15, majf=0, minf=86 00:35:30.206 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:30.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.206 issued rwts: total=1412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:30.206 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:30.206 00:35:30.206 Run status group 0 (all jobs): 00:35:30.206 READ: bw=1323KiB/s (1354kB/s), 565KiB/s-758KiB/s (578kB/s-776kB/s), io=12.9MiB (13.5MB), run=10003-10004msec 
00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.206 05:55:23 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.206 00:35:30.206 real 0m11.293s 00:35:30.206 user 0m20.169s 00:35:30.206 sys 0m1.375s 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:30.206 05:55:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.206 ************************************ 00:35:30.206 END TEST fio_dif_1_multi_subsystems 00:35:30.206 ************************************ 00:35:30.206 05:55:23 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:30.206 05:55:23 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:30.207 05:55:23 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:30.207 05:55:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.207 ************************************ 00:35:30.207 START TEST fio_dif_rand_params 00:35:30.207 ************************************ 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 
00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.207 bdev_null0 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.207 [2024-07-25 05:55:23.556063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:30.207 { 00:35:30.207 "params": { 00:35:30.207 "name": "Nvme$subsystem", 00:35:30.207 "trtype": "$TEST_TRANSPORT", 00:35:30.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.207 "adrfam": "ipv4", 00:35:30.207 "trsvcid": "$NVMF_PORT", 00:35:30.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.207 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:35:30.207 "hdgst": ${hdgst:-false}, 00:35:30.207 "ddgst": ${ddgst:-false} 00:35:30.207 }, 00:35:30.207 "method": "bdev_nvme_attach_controller" 00:35:30.207 } 00:35:30.207 EOF 00:35:30.207 )") 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:30.207 "params": { 00:35:30.207 "name": "Nvme0", 00:35:30.207 "trtype": "tcp", 00:35:30.207 "traddr": "10.0.0.2", 00:35:30.207 "adrfam": "ipv4", 00:35:30.207 "trsvcid": "4420", 00:35:30.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:30.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:30.207 "hdgst": false, 00:35:30.207 "ddgst": false 00:35:30.207 }, 00:35:30.207 "method": "bdev_nvme_attach_controller" 00:35:30.207 }' 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:30.207 05:55:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.207 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:30.207 ... 00:35:30.207 fio-3.35 00:35:30.207 Starting 3 threads 00:35:30.207 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.761 00:35:36.761 filename0: (groupid=0, jobs=1): err= 0: pid=1794470: Thu Jul 25 05:55:29 2024 00:35:36.761 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(132MiB/5006msec) 00:35:36.761 slat (nsec): min=5046, max=55693, avg=15938.25, stdev=5104.19 00:35:36.761 clat (usec): min=5706, max=90843, avg=14158.58, stdev=12235.66 00:35:36.761 lat (usec): min=5719, max=90862, avg=14174.52, stdev=12235.82 00:35:36.761 clat percentiles (usec): 00:35:36.761 | 1.00th=[ 5866], 5.00th=[ 6587], 10.00th=[ 7963], 20.00th=[ 8717], 00:35:36.761 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11600], 00:35:36.761 | 70.00th=[12387], 80.00th=[13435], 90.00th=[15401], 95.00th=[51643], 00:35:36.761 | 99.00th=[54264], 99.50th=[55313], 99.90th=[55837], 99.95th=[90702], 00:35:36.761 | 99.99th=[90702] 00:35:36.761 bw ( KiB/s): min=19968, max=35328, per=32.85%, avg=27033.60, stdev=5846.67, samples=10 00:35:36.761 iops : min= 156, max= 276, avg=211.20, stdev=45.68, samples=10 00:35:36.761 lat (msec) : 10=45.99%, 20=45.04%, 50=1.70%, 100=7.27% 00:35:36.761 cpu : usr=94.37%, sys=5.19%, ctx=16, majf=0, minf=112 00:35:36.761 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.761 issued rwts: total=1059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.761 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:35:36.761 filename0: (groupid=0, jobs=1): err= 0: pid=1794471: Thu Jul 25 05:55:29 2024 00:35:36.761 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(149MiB/5005msec) 00:35:36.761 slat (nsec): min=5128, max=39408, avg=14681.11, stdev=4046.71 00:35:36.761 clat (usec): min=4801, max=55959, avg=12615.04, stdev=10420.03 00:35:36.761 lat (usec): min=4815, max=55975, avg=12629.72, stdev=10420.14 00:35:36.761 clat percentiles (usec): 00:35:36.761 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 8029], 00:35:36.761 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[11076], 00:35:36.761 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14615], 95.00th=[49546], 00:35:36.761 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[55837], 00:35:36.761 | 99.99th=[55837] 00:35:36.761 bw ( KiB/s): min=24064, max=37632, per=36.90%, avg=30366.90, stdev=4562.59, samples=10 00:35:36.761 iops : min= 188, max= 294, avg=237.20, stdev=35.68, samples=10 00:35:36.761 lat (msec) : 10=52.10%, 20=41.58%, 50=1.85%, 100=4.46% 00:35:36.761 cpu : usr=94.22%, sys=5.32%, ctx=9, majf=0, minf=85 00:35:36.761 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.761 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.761 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.761 filename0: (groupid=0, jobs=1): err= 0: pid=1794472: Thu Jul 25 05:55:29 2024 00:35:36.761 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(123MiB/5029msec) 00:35:36.761 slat (nsec): min=5452, max=36333, avg=14819.21, stdev=3611.91 00:35:36.761 clat (usec): min=5434, max=91602, avg=15281.26, stdev=13697.50 00:35:36.761 lat (usec): min=5447, max=91616, avg=15296.07, stdev=13697.45 00:35:36.761 clat percentiles (usec): 00:35:36.761 | 1.00th=[ 5669], 5.00th=[ 6521], 
10.00th=[ 7635], 20.00th=[ 8455], 00:35:36.761 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[11863], 00:35:36.761 | 70.00th=[12780], 80.00th=[13960], 90.00th=[49021], 95.00th=[52691], 00:35:36.761 | 99.00th=[54789], 99.50th=[55313], 99.90th=[91751], 99.95th=[91751], 00:35:36.761 | 99.99th=[91751] 00:35:36.761 bw ( KiB/s): min=15872, max=36864, per=30.58%, avg=25164.80, stdev=6627.55, samples=10 00:35:36.761 iops : min= 124, max= 288, avg=196.60, stdev=51.78, samples=10 00:35:36.761 lat (msec) : 10=42.49%, 20=46.15%, 50=2.23%, 100=9.13% 00:35:36.761 cpu : usr=94.31%, sys=4.89%, ctx=27, majf=0, minf=110 00:35:36.761 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.761 issued rwts: total=986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.761 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.761 00:35:36.761 Run status group 0 (all jobs): 00:35:36.761 READ: bw=80.4MiB/s (84.3MB/s), 24.5MiB/s-29.7MiB/s (25.7MB/s-31.1MB/s), io=404MiB (424MB), run=5005-5029msec 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.761 bdev_null0 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.761 05:55:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.761 [2024-07-25 05:55:29.770361] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.761 bdev_null1 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:36.761 05:55:29 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.762 bdev_null2 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 
2 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.762 { 00:35:36.762 "params": { 00:35:36.762 "name": "Nvme$subsystem", 00:35:36.762 "trtype": "$TEST_TRANSPORT", 00:35:36.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.762 "adrfam": "ipv4", 00:35:36.762 "trsvcid": "$NVMF_PORT", 00:35:36.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.762 "hdgst": ${hdgst:-false}, 00:35:36.762 "ddgst": ${ddgst:-false} 00:35:36.762 }, 00:35:36.762 "method": "bdev_nvme_attach_controller" 00:35:36.762 } 00:35:36.762 EOF 00:35:36.762 )") 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:36.762 05:55:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.762 { 00:35:36.762 "params": { 00:35:36.762 "name": "Nvme$subsystem", 00:35:36.762 "trtype": "$TEST_TRANSPORT", 00:35:36.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.762 "adrfam": "ipv4", 00:35:36.762 "trsvcid": "$NVMF_PORT", 00:35:36.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.762 "hdgst": ${hdgst:-false}, 00:35:36.762 "ddgst": ${ddgst:-false} 00:35:36.762 }, 00:35:36.762 "method": "bdev_nvme_attach_controller" 
00:35:36.762 } 00:35:36.762 EOF 00:35:36.762 )") 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.762 { 00:35:36.762 "params": { 00:35:36.762 "name": "Nvme$subsystem", 00:35:36.762 "trtype": "$TEST_TRANSPORT", 00:35:36.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.762 "adrfam": "ipv4", 00:35:36.762 "trsvcid": "$NVMF_PORT", 00:35:36.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.762 "hdgst": ${hdgst:-false}, 00:35:36.762 "ddgst": ${ddgst:-false} 00:35:36.762 }, 00:35:36.762 "method": "bdev_nvme_attach_controller" 00:35:36.762 } 00:35:36.762 EOF 00:35:36.762 )") 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
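The heredoc template above (`gen_nvmf_target_json`) emits one `bdev_nvme_attach_controller` parameter block per subsystem id, which the script joins with `IFS=,` and pipes through `jq` before handing it to fio over `/dev/fd/62`. A rough Python mirror of that expansion, assuming the target address and port visible in this log (the real script comma-joins the raw blocks; a JSON array serves the same illustrative purpose here):

```python
import json

def gen_nvmf_target_json(*subsystems, traddr="10.0.0.2", trsvcid="4420"):
    """Sketch of the shell helper: one attach-controller block per id."""
    config = []
    for sub in subsystems or (0,):
        config.append({
            "params": {
                "name": f"Nvme{sub}",
                "trtype": "tcp",
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{sub}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{sub}",
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        })
    return json.dumps(config, indent=2)

print(gen_nvmf_target_json(0, 1, 2))
```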
00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:36.762 "params": { 00:35:36.762 "name": "Nvme0", 00:35:36.762 "trtype": "tcp", 00:35:36.762 "traddr": "10.0.0.2", 00:35:36.762 "adrfam": "ipv4", 00:35:36.762 "trsvcid": "4420", 00:35:36.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:36.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:36.762 "hdgst": false, 00:35:36.762 "ddgst": false 00:35:36.762 }, 00:35:36.762 "method": "bdev_nvme_attach_controller" 00:35:36.762 },{ 00:35:36.762 "params": { 00:35:36.762 "name": "Nvme1", 00:35:36.762 "trtype": "tcp", 00:35:36.762 "traddr": "10.0.0.2", 00:35:36.762 "adrfam": "ipv4", 00:35:36.762 "trsvcid": "4420", 00:35:36.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.762 "hdgst": false, 00:35:36.762 "ddgst": false 00:35:36.762 }, 00:35:36.762 "method": "bdev_nvme_attach_controller" 00:35:36.762 },{ 00:35:36.762 "params": { 00:35:36.762 "name": "Nvme2", 00:35:36.762 "trtype": "tcp", 00:35:36.762 "traddr": "10.0.0.2", 00:35:36.762 "adrfam": "ipv4", 00:35:36.762 "trsvcid": "4420", 00:35:36.762 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:36.762 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:36.762 "hdgst": false, 00:35:36.762 "ddgst": false 00:35:36.762 }, 00:35:36.762 "method": "bdev_nvme_attach_controller" 00:35:36.762 }' 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.762 05:55:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:36.762 05:55:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.762 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:36.762 ... 00:35:36.762 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:36.762 ... 00:35:36.762 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:36.762 ... 
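The invocation `fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61` shows the harness handing both the generated JSON config and the fio job file to fio through process substitution, with no temp files on disk. A hedged sketch of that mechanism, using `cat` as a stand-in for fio (assumption: fio itself is not run here, only the file-descriptor plumbing is demonstrated):

```shell
#!/usr/bin/env bash
# Two in-memory documents delivered to a consumer as /dev/fd/NN paths.
json='{"subsystems": []}'
jobfile=$'[global]\nioengine=spdk_bdev\niodepth=16'

# bash expands each <(...) to a /dev/fd/NN path, mirroring how the
# harness passes /dev/fd/62 and /dev/fd/61 to fio.
out="$(cat <(printf '%s\n' "$json") <(printf '%s\n' "$jobfile"))"
printf '%s\n' "$out"
```

The surrounding `ldd ... | grep libclang_rt.asan` loop in the trace only populates `LD_PRELOAD` when the binary was built with ASan; here it matched nothing, so `asan_lib=` stays empty and fio runs unsanitized.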
00:35:36.762 fio-3.35 00:35:36.762 Starting 24 threads 00:35:36.762 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.955 00:35:48.955 filename0: (groupid=0, jobs=1): err= 0: pid=1795333: Thu Jul 25 05:55:41 2024 00:35:48.955 read: IOPS=470, BW=1880KiB/s (1925kB/s)(18.4MiB/10008msec) 00:35:48.955 slat (nsec): min=8498, max=92732, avg=31856.27, stdev=13313.52 00:35:48.955 clat (usec): min=27533, max=70125, avg=33749.11, stdev=1938.14 00:35:48.955 lat (usec): min=27544, max=70162, avg=33780.97, stdev=1937.71 00:35:48.955 clat percentiles (usec): 00:35:48.955 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:48.955 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:48.955 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.955 | 99.00th=[36963], 99.50th=[38011], 99.90th=[63701], 99.95th=[63701], 00:35:48.955 | 99.99th=[69731] 00:35:48.955 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1879.58, stdev=74.55, samples=19 00:35:48.955 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:48.955 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.955 cpu : usr=97.24%, sys=1.82%, ctx=34, majf=0, minf=45 00:35:48.955 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.955 filename0: (groupid=0, jobs=1): err= 0: pid=1795334: Thu Jul 25 05:55:41 2024 00:35:48.955 read: IOPS=470, BW=1883KiB/s (1928kB/s)(18.4MiB/10026msec) 00:35:48.955 slat (usec): min=9, max=144, avg=47.06, stdev=20.43 00:35:48.955 clat (usec): min=24363, max=50037, avg=33569.06, stdev=1253.45 00:35:48.955 lat (usec): min=24424, max=50078, avg=33616.12, stdev=1251.75 
00:35:48.955 clat percentiles (usec): 00:35:48.955 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:48.955 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:48.955 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.955 | 99.00th=[37487], 99.50th=[38011], 99.90th=[50070], 99.95th=[50070], 00:35:48.955 | 99.99th=[50070] 00:35:48.955 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1881.60, stdev=73.12, samples=20 00:35:48.955 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:48.955 lat (msec) : 50=99.96%, 100=0.04% 00:35:48.955 cpu : usr=97.24%, sys=1.74%, ctx=206, majf=0, minf=41 00:35:48.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.955 filename0: (groupid=0, jobs=1): err= 0: pid=1795335: Thu Jul 25 05:55:41 2024 00:35:48.955 read: IOPS=472, BW=1889KiB/s (1935kB/s)(18.5MiB/10006msec) 00:35:48.955 slat (usec): min=7, max=120, avg=37.24, stdev=20.77 00:35:48.955 clat (usec): min=11451, max=58777, avg=33550.41, stdev=1884.83 00:35:48.955 lat (usec): min=11527, max=58799, avg=33587.64, stdev=1880.34 00:35:48.955 clat percentiles (usec): 00:35:48.955 | 1.00th=[30016], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:48.955 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:48.955 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.955 | 99.00th=[36963], 99.50th=[38536], 99.90th=[55837], 99.95th=[55837], 00:35:48.955 | 99.99th=[58983] 00:35:48.955 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1884.00, stdev=57.31, samples=20 00:35:48.955 iops : min= 448, max= 
480, avg=471.00, stdev=14.33, samples=20 00:35:48.955 lat (msec) : 20=0.38%, 50=99.41%, 100=0.21% 00:35:48.955 cpu : usr=95.12%, sys=2.98%, ctx=349, majf=0, minf=55 00:35:48.955 IO depths : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:48.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 issued rwts: total=4726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.955 filename0: (groupid=0, jobs=1): err= 0: pid=1795336: Thu Jul 25 05:55:41 2024 00:35:48.955 read: IOPS=470, BW=1883KiB/s (1928kB/s)(18.4MiB/10026msec) 00:35:48.955 slat (usec): min=13, max=153, avg=47.17, stdev=17.61 00:35:48.955 clat (usec): min=26763, max=50102, avg=33591.56, stdev=1217.70 00:35:48.955 lat (usec): min=26821, max=50143, avg=33638.72, stdev=1215.29 00:35:48.955 clat percentiles (usec): 00:35:48.955 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:48.955 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:48.955 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.955 | 99.00th=[37487], 99.50th=[38011], 99.90th=[50070], 99.95th=[50070], 00:35:48.955 | 99.99th=[50070] 00:35:48.955 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1881.60, stdev=73.12, samples=20 00:35:48.955 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:48.955 lat (msec) : 50=99.92%, 100=0.08% 00:35:48.955 cpu : usr=97.64%, sys=1.97%, ctx=27, majf=0, minf=50 00:35:48.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.955 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:35:48.955 filename0: (groupid=0, jobs=1): err= 0: pid=1795337: Thu Jul 25 05:55:41 2024 00:35:48.955 read: IOPS=474, BW=1897KiB/s (1942kB/s)(18.6MiB/10022msec) 00:35:48.955 slat (usec): min=7, max=112, avg=29.13, stdev=24.75 00:35:48.955 clat (usec): min=11746, max=51755, avg=33490.73, stdev=2300.72 00:35:48.955 lat (usec): min=11767, max=51829, avg=33519.86, stdev=2298.38 00:35:48.955 clat percentiles (usec): 00:35:48.955 | 1.00th=[25035], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:48.955 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:48.955 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.955 | 99.00th=[36439], 99.50th=[38011], 99.90th=[50594], 99.95th=[51643], 00:35:48.955 | 99.99th=[51643] 00:35:48.955 bw ( KiB/s): min= 1792, max= 2032, per=4.19%, avg=1894.40, stdev=63.87, samples=20 00:35:48.955 iops : min= 448, max= 508, avg=473.60, stdev=15.97, samples=20 00:35:48.955 lat (msec) : 20=0.88%, 50=98.95%, 100=0.17% 00:35:48.955 cpu : usr=97.90%, sys=1.67%, ctx=21, majf=0, minf=69 00:35:48.955 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.955 filename0: (groupid=0, jobs=1): err= 0: pid=1795338: Thu Jul 25 05:55:41 2024 00:35:48.955 read: IOPS=470, BW=1883KiB/s (1928kB/s)(18.4MiB/10026msec) 00:35:48.955 slat (usec): min=10, max=144, avg=56.22, stdev=22.30 00:35:48.955 clat (usec): min=22890, max=50570, avg=33512.89, stdev=1468.21 00:35:48.955 lat (usec): min=22901, max=50591, avg=33569.10, stdev=1463.97 00:35:48.955 clat percentiles (usec): 00:35:48.955 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 
20.00th=[32900], 00:35:48.955 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:48.955 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.955 | 99.00th=[37487], 99.50th=[42730], 99.90th=[50594], 99.95th=[50594], 00:35:48.955 | 99.99th=[50594] 00:35:48.955 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1881.60, stdev=73.12, samples=20 00:35:48.955 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:48.955 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.955 cpu : usr=98.18%, sys=1.37%, ctx=14, majf=0, minf=40 00:35:48.955 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.955 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.955 filename0: (groupid=0, jobs=1): err= 0: pid=1795339: Thu Jul 25 05:55:41 2024 00:35:48.955 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10003msec) 00:35:48.955 slat (usec): min=9, max=103, avg=44.88, stdev=17.59 00:35:48.955 clat (usec): min=26742, max=65333, avg=33625.93, stdev=1967.55 00:35:48.955 lat (usec): min=26776, max=65360, avg=33670.81, stdev=1966.65 00:35:48.955 clat percentiles (usec): 00:35:48.955 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:48.955 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:48.955 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:48.955 | 99.00th=[36963], 99.50th=[37487], 99.90th=[65274], 99.95th=[65274], 00:35:48.956 | 99.99th=[65274] 00:35:48.956 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1879.58, stdev=74.55, samples=19 00:35:48.956 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:48.956 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.956 cpu 
: usr=98.24%, sys=1.37%, ctx=12, majf=0, minf=41 00:35:48.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.956 filename0: (groupid=0, jobs=1): err= 0: pid=1795340: Thu Jul 25 05:55:41 2024 00:35:48.956 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10002msec) 00:35:48.956 slat (usec): min=14, max=100, avg=47.47, stdev=15.80 00:35:48.956 clat (usec): min=26867, max=65020, avg=33576.10, stdev=1953.85 00:35:48.956 lat (usec): min=26893, max=65060, avg=33623.57, stdev=1953.45 00:35:48.956 clat percentiles (usec): 00:35:48.956 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:48.956 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:48.956 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:48.956 | 99.00th=[36439], 99.50th=[37487], 99.90th=[64750], 99.95th=[64750], 00:35:48.956 | 99.99th=[65274] 00:35:48.956 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1879.74, stdev=74.07, samples=19 00:35:48.956 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:48.956 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.956 cpu : usr=97.80%, sys=1.65%, ctx=68, majf=0, minf=46 00:35:48.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.956 filename1: (groupid=0, jobs=1): err= 0: pid=1795341: Thu Jul 25 
05:55:41 2024 00:35:48.956 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10003msec) 00:35:48.956 slat (nsec): min=14383, max=83454, avg=40433.89, stdev=11442.83 00:35:48.956 clat (usec): min=26886, max=64683, avg=33652.82, stdev=1927.20 00:35:48.956 lat (usec): min=26917, max=64726, avg=33693.25, stdev=1926.95 00:35:48.956 clat percentiles (usec): 00:35:48.956 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:48.956 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:48.956 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:48.956 | 99.00th=[36963], 99.50th=[37487], 99.90th=[64750], 99.95th=[64750], 00:35:48.956 | 99.99th=[64750] 00:35:48.956 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1879.74, stdev=74.07, samples=19 00:35:48.956 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:48.956 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.956 cpu : usr=97.76%, sys=1.84%, ctx=27, majf=0, minf=39 00:35:48.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.956 filename1: (groupid=0, jobs=1): err= 0: pid=1795342: Thu Jul 25 05:55:41 2024 00:35:48.956 read: IOPS=473, BW=1896KiB/s (1941kB/s)(18.5MiB/10006msec) 00:35:48.956 slat (usec): min=6, max=102, avg=23.67, stdev=13.11 00:35:48.956 clat (usec): min=11935, max=40914, avg=33561.28, stdev=1558.42 00:35:48.956 lat (usec): min=11996, max=40971, avg=33584.95, stdev=1554.94 00:35:48.956 clat percentiles (usec): 00:35:48.956 | 1.00th=[25297], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:35:48.956 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:48.956 | 
70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.956 | 99.00th=[35914], 99.50th=[36963], 99.90th=[40633], 99.95th=[40633], 00:35:48.956 | 99.99th=[41157] 00:35:48.956 bw ( KiB/s): min= 1792, max= 1968, per=4.19%, avg=1890.40, stdev=59.25, samples=20 00:35:48.956 iops : min= 448, max= 492, avg=472.60, stdev=14.81, samples=20 00:35:48.956 lat (msec) : 20=0.23%, 50=99.77% 00:35:48.956 cpu : usr=97.78%, sys=1.75%, ctx=17, majf=0, minf=42 00:35:48.956 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 issued rwts: total=4742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.956 filename1: (groupid=0, jobs=1): err= 0: pid=1795343: Thu Jul 25 05:55:41 2024 00:35:48.956 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10002msec) 00:35:48.956 slat (usec): min=14, max=165, avg=66.80, stdev=23.39 00:35:48.956 clat (usec): min=25977, max=64548, avg=33409.28, stdev=1957.12 00:35:48.956 lat (usec): min=26050, max=64593, avg=33476.07, stdev=1955.28 00:35:48.956 clat percentiles (usec): 00:35:48.956 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:35:48.956 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:48.956 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:48.956 | 99.00th=[36963], 99.50th=[37487], 99.90th=[64226], 99.95th=[64226], 00:35:48.956 | 99.99th=[64750] 00:35:48.956 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1879.74, stdev=74.07, samples=19 00:35:48.956 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:48.956 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.956 cpu : usr=98.11%, sys=1.45%, ctx=15, majf=0, minf=32 00:35:48.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.956 filename1: (groupid=0, jobs=1): err= 0: pid=1795344: Thu Jul 25 05:55:41 2024 00:35:48.956 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10004msec) 00:35:48.956 slat (usec): min=8, max=106, avg=24.39, stdev=22.64 00:35:48.956 clat (usec): min=28193, max=59858, avg=33786.07, stdev=1684.56 00:35:48.956 lat (usec): min=28205, max=59898, avg=33810.47, stdev=1682.01 00:35:48.956 clat percentiles (usec): 00:35:48.956 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:35:48.956 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:48.956 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.956 | 99.00th=[38011], 99.50th=[38536], 99.90th=[59507], 99.95th=[60031], 00:35:48.956 | 99.99th=[60031] 00:35:48.956 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1879.58, stdev=74.55, samples=19 00:35:48.956 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:48.956 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.956 cpu : usr=96.13%, sys=2.32%, ctx=121, majf=0, minf=83 00:35:48.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.956 filename1: (groupid=0, jobs=1): err= 0: pid=1795345: Thu Jul 25 05:55:41 2024 00:35:48.956 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10012msec) 00:35:48.956 slat 
(usec): min=7, max=165, avg=24.99, stdev=10.29 00:35:48.956 clat (usec): min=3861, max=38802, avg=33495.08, stdev=2464.67 00:35:48.956 lat (usec): min=3873, max=38826, avg=33520.07, stdev=2465.34 00:35:48.956 clat percentiles (usec): 00:35:48.956 | 1.00th=[24249], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:35:48.956 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:48.956 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.956 | 99.00th=[36439], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:35:48.956 | 99.99th=[39060] 00:35:48.956 bw ( KiB/s): min= 1792, max= 2048, per=4.19%, avg=1894.40, stdev=66.96, samples=20 00:35:48.956 iops : min= 448, max= 512, avg=473.60, stdev=16.74, samples=20 00:35:48.956 lat (msec) : 4=0.15%, 10=0.53%, 20=0.15%, 50=99.18% 00:35:48.956 cpu : usr=94.91%, sys=2.84%, ctx=101, majf=0, minf=70 00:35:48.956 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.956 filename1: (groupid=0, jobs=1): err= 0: pid=1795346: Thu Jul 25 05:55:41 2024 00:35:48.956 read: IOPS=470, BW=1883KiB/s (1928kB/s)(18.4MiB/10026msec) 00:35:48.956 slat (usec): min=8, max=116, avg=28.92, stdev=17.49 00:35:48.956 clat (usec): min=22804, max=50125, avg=33750.26, stdev=1386.67 00:35:48.956 lat (usec): min=22825, max=50165, avg=33779.18, stdev=1384.44 00:35:48.956 clat percentiles (usec): 00:35:48.956 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:48.956 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:48.956 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.956 | 99.00th=[37487], 
99.50th=[44303], 99.90th=[50070], 99.95th=[50070], 00:35:48.956 | 99.99th=[50070] 00:35:48.956 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1881.60, stdev=73.12, samples=20 00:35:48.956 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:48.956 lat (msec) : 50=99.87%, 100=0.13% 00:35:48.956 cpu : usr=96.98%, sys=2.26%, ctx=166, majf=0, minf=61 00:35:48.956 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.957 filename1: (groupid=0, jobs=1): err= 0: pid=1795347: Thu Jul 25 05:55:41 2024 00:35:48.957 read: IOPS=470, BW=1880KiB/s (1925kB/s)(18.4MiB/10008msec) 00:35:48.957 slat (usec): min=17, max=120, avg=73.65, stdev= 9.50 00:35:48.957 clat (usec): min=27584, max=72374, avg=33376.79, stdev=2080.10 00:35:48.957 lat (usec): min=27642, max=72413, avg=33450.44, stdev=2077.80 00:35:48.957 clat percentiles (usec): 00:35:48.957 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:35:48.957 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:48.957 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:48.957 | 99.00th=[35914], 99.50th=[38011], 99.90th=[65799], 99.95th=[65799], 00:35:48.957 | 99.99th=[72877] 00:35:48.957 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1872.84, stdev=76.45, samples=19 00:35:48.957 iops : min= 416, max= 480, avg=468.21, stdev=19.11, samples=19 00:35:48.957 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.957 cpu : usr=98.22%, sys=1.34%, ctx=12, majf=0, minf=48 00:35:48.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:35:48.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.957 filename1: (groupid=0, jobs=1): err= 0: pid=1795348: Thu Jul 25 05:55:41 2024 00:35:48.957 read: IOPS=469, BW=1877KiB/s (1922kB/s)(18.4MiB/10023msec) 00:35:48.957 slat (nsec): min=9695, max=84961, avg=29498.23, stdev=10224.78 00:35:48.957 clat (usec): min=28266, max=81812, avg=33819.69, stdev=2867.42 00:35:48.957 lat (usec): min=28292, max=81855, avg=33849.19, stdev=2867.89 00:35:48.957 clat percentiles (usec): 00:35:48.957 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:35:48.957 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:48.957 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.957 | 99.00th=[36963], 99.50th=[38011], 99.90th=[81265], 99.95th=[81265], 00:35:48.957 | 99.99th=[82314] 00:35:48.957 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1875.20, stdev=75.15, samples=20 00:35:48.957 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:35:48.957 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.957 cpu : usr=94.25%, sys=3.16%, ctx=266, majf=0, minf=54 00:35:48.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.957 filename2: (groupid=0, jobs=1): err= 0: pid=1795349: Thu Jul 25 05:55:41 2024 00:35:48.957 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10003msec) 00:35:48.957 slat (usec): min=9, max=161, avg=45.54, stdev=18.85 00:35:48.957 clat (usec): min=26632, max=65595, 
avg=33606.31, stdev=1994.43 00:35:48.957 lat (usec): min=26659, max=65617, avg=33651.84, stdev=1993.08 00:35:48.957 clat percentiles (usec): 00:35:48.957 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:48.957 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:48.957 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:48.957 | 99.00th=[36963], 99.50th=[37487], 99.90th=[65274], 99.95th=[65799], 00:35:48.957 | 99.99th=[65799] 00:35:48.957 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1879.58, stdev=74.55, samples=19 00:35:48.957 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:48.957 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.957 cpu : usr=93.07%, sys=3.69%, ctx=330, majf=0, minf=58 00:35:48.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.957 filename2: (groupid=0, jobs=1): err= 0: pid=1795350: Thu Jul 25 05:55:41 2024 00:35:48.957 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:35:48.957 slat (nsec): min=7501, max=95030, avg=21060.11, stdev=17457.57 00:35:48.957 clat (usec): min=12368, max=38478, avg=33641.30, stdev=1637.86 00:35:48.957 lat (usec): min=12394, max=38511, avg=33662.36, stdev=1635.96 00:35:48.957 clat percentiles (usec): 00:35:48.957 | 1.00th=[30016], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:35:48.957 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:35:48.957 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.957 | 99.00th=[36963], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:35:48.957 | 99.99th=[38536] 00:35:48.957 bw ( 
KiB/s): min= 1792, max= 1920, per=4.18%, avg=1888.00, stdev=56.87, samples=20 00:35:48.957 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:35:48.957 lat (msec) : 20=0.49%, 50=99.51% 00:35:48.957 cpu : usr=97.94%, sys=1.63%, ctx=20, majf=0, minf=67 00:35:48.957 IO depths : 1=5.8%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:48.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.957 filename2: (groupid=0, jobs=1): err= 0: pid=1795351: Thu Jul 25 05:55:41 2024 00:35:48.957 read: IOPS=470, BW=1880KiB/s (1926kB/s)(18.4MiB/10006msec) 00:35:48.957 slat (usec): min=4, max=100, avg=45.30, stdev=14.99 00:35:48.957 clat (usec): min=26824, max=77890, avg=33616.52, stdev=2219.64 00:35:48.957 lat (usec): min=26852, max=77907, avg=33661.82, stdev=2217.90 00:35:48.957 clat percentiles (usec): 00:35:48.957 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:48.957 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:48.957 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:48.957 | 99.00th=[35914], 99.50th=[37487], 99.90th=[68682], 99.95th=[68682], 00:35:48.957 | 99.99th=[78119] 00:35:48.957 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1872.84, stdev=76.45, samples=19 00:35:48.957 iops : min= 416, max= 480, avg=468.21, stdev=19.11, samples=19 00:35:48.957 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.957 cpu : usr=97.63%, sys=1.68%, ctx=122, majf=0, minf=49 00:35:48.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:48.957 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.957 filename2: (groupid=0, jobs=1): err= 0: pid=1795352: Thu Jul 25 05:55:41 2024 00:35:48.957 read: IOPS=469, BW=1878KiB/s (1923kB/s)(18.4MiB/10019msec) 00:35:48.957 slat (nsec): min=5820, max=67860, avg=27881.37, stdev=9535.10 00:35:48.957 clat (usec): min=28296, max=77565, avg=33826.87, stdev=2711.30 00:35:48.957 lat (usec): min=28312, max=77586, avg=33854.75, stdev=2710.25 00:35:48.957 clat percentiles (usec): 00:35:48.957 | 1.00th=[30016], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:35:48.957 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:48.957 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.957 | 99.00th=[38011], 99.50th=[39584], 99.90th=[77071], 99.95th=[77071], 00:35:48.957 | 99.99th=[77071] 00:35:48.957 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1875.20, stdev=73.89, samples=20 00:35:48.957 iops : min= 416, max= 480, avg=468.80, stdev=18.47, samples=20 00:35:48.957 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.957 cpu : usr=85.77%, sys=6.57%, ctx=665, majf=0, minf=52 00:35:48.957 IO depths : 1=5.8%, 2=11.9%, 4=24.5%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:48.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.957 filename2: (groupid=0, jobs=1): err= 0: pid=1795353: Thu Jul 25 05:55:41 2024 00:35:48.957 read: IOPS=470, BW=1883KiB/s (1928kB/s)(18.4MiB/10026msec) 00:35:48.957 slat (usec): min=8, max=116, avg=36.96, stdev=18.46 00:35:48.957 clat (usec): min=22909, max=59524, avg=33695.46, stdev=1479.37 00:35:48.957 lat (usec): min=22969, max=59544, avg=33732.42, 
stdev=1476.25 00:35:48.957 clat percentiles (usec): 00:35:48.957 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:48.957 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:48.957 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.957 | 99.00th=[37487], 99.50th=[43254], 99.90th=[50070], 99.95th=[50070], 00:35:48.957 | 99.99th=[59507] 00:35:48.957 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1881.60, stdev=73.12, samples=20 00:35:48.957 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:48.957 lat (msec) : 50=99.70%, 100=0.30% 00:35:48.957 cpu : usr=97.31%, sys=2.28%, ctx=25, majf=0, minf=47 00:35:48.957 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.957 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.957 filename2: (groupid=0, jobs=1): err= 0: pid=1795354: Thu Jul 25 05:55:41 2024 00:35:48.957 read: IOPS=470, BW=1883KiB/s (1928kB/s)(18.4MiB/10026msec) 00:35:48.957 slat (usec): min=8, max=104, avg=45.25, stdev=17.24 00:35:48.957 clat (usec): min=25049, max=50499, avg=33603.44, stdev=1250.45 00:35:48.957 lat (usec): min=25073, max=50517, avg=33648.69, stdev=1248.38 00:35:48.957 clat percentiles (usec): 00:35:48.957 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:48.957 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:48.957 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.958 | 99.00th=[37487], 99.50th=[38011], 99.90th=[50594], 99.95th=[50594], 00:35:48.958 | 99.99th=[50594] 00:35:48.958 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1881.60, stdev=73.12, samples=20 00:35:48.958 iops : 
min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:48.958 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.958 cpu : usr=97.64%, sys=1.97%, ctx=33, majf=0, minf=41 00:35:48.958 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.958 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.958 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.958 filename2: (groupid=0, jobs=1): err= 0: pid=1795355: Thu Jul 25 05:55:41 2024 00:35:48.958 read: IOPS=470, BW=1883KiB/s (1928kB/s)(18.4MiB/10026msec) 00:35:48.958 slat (usec): min=8, max=106, avg=42.64, stdev=17.31 00:35:48.958 clat (usec): min=23466, max=49999, avg=33625.38, stdev=1307.99 00:35:48.958 lat (usec): min=23477, max=50040, avg=33668.02, stdev=1306.12 00:35:48.958 clat percentiles (usec): 00:35:48.958 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:48.958 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:48.958 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:48.958 | 99.00th=[38011], 99.50th=[38011], 99.90th=[50070], 99.95th=[50070], 00:35:48.958 | 99.99th=[50070] 00:35:48.958 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1881.60, stdev=73.12, samples=20 00:35:48.958 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:48.958 lat (msec) : 50=100.00% 00:35:48.958 cpu : usr=90.75%, sys=4.33%, ctx=144, majf=0, minf=46 00:35:48.958 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.958 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.958 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.958 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:35:48.958 filename2: (groupid=0, jobs=1): err= 0: pid=1795356: Thu Jul 25 05:55:41 2024 00:35:48.958 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10004msec) 00:35:48.958 slat (usec): min=10, max=119, avg=53.15, stdev=20.47 00:35:48.958 clat (usec): min=26812, max=66042, avg=33518.03, stdev=2020.48 00:35:48.958 lat (usec): min=26830, max=66104, avg=33571.18, stdev=2019.46 00:35:48.958 clat percentiles (usec): 00:35:48.958 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:48.958 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:48.958 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:48.958 | 99.00th=[36439], 99.50th=[37487], 99.90th=[65799], 99.95th=[65799], 00:35:48.958 | 99.99th=[65799] 00:35:48.958 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1879.58, stdev=74.55, samples=19 00:35:48.958 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:48.958 lat (msec) : 50=99.66%, 100=0.34% 00:35:48.958 cpu : usr=98.17%, sys=1.42%, ctx=15, majf=0, minf=48 00:35:48.958 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:48.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.958 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.958 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.958 00:35:48.958 Run status group 0 (all jobs): 00:35:48.958 READ: bw=44.1MiB/s (46.2MB/s), 1877KiB/s-1899KiB/s (1922kB/s-1944kB/s), io=442MiB (464MB), run=10002-10026msec 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.958 05:55:41 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:48.958 
05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.958 bdev_null0 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:48.958 [2024-07-25 05:55:41.567826] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:48.958 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.959 bdev_null1 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.959 05:55:41 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:48.959 { 00:35:48.959 "params": { 00:35:48.959 "name": "Nvme$subsystem", 00:35:48.959 "trtype": "$TEST_TRANSPORT", 00:35:48.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.959 "adrfam": "ipv4", 00:35:48.959 "trsvcid": "$NVMF_PORT", 00:35:48.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.959 "hdgst": ${hdgst:-false}, 00:35:48.959 "ddgst": ${ddgst:-false} 00:35:48.959 }, 00:35:48.959 "method": "bdev_nvme_attach_controller" 00:35:48.959 } 00:35:48.959 EOF 00:35:48.959 )") 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:48.959 { 00:35:48.959 "params": { 00:35:48.959 "name": "Nvme$subsystem", 00:35:48.959 "trtype": "$TEST_TRANSPORT", 00:35:48.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.959 "adrfam": "ipv4", 00:35:48.959 "trsvcid": "$NVMF_PORT", 00:35:48.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.959 "hdgst": ${hdgst:-false}, 00:35:48.959 "ddgst": ${ddgst:-false} 00:35:48.959 }, 00:35:48.959 "method": "bdev_nvme_attach_controller" 00:35:48.959 } 00:35:48.959 EOF 00:35:48.959 )") 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:48.959 "params": { 00:35:48.959 "name": "Nvme0", 00:35:48.959 "trtype": "tcp", 00:35:48.959 "traddr": "10.0.0.2", 00:35:48.959 "adrfam": "ipv4", 00:35:48.959 "trsvcid": "4420", 00:35:48.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.959 "hdgst": false, 00:35:48.959 "ddgst": false 00:35:48.959 }, 00:35:48.959 "method": "bdev_nvme_attach_controller" 00:35:48.959 },{ 00:35:48.959 "params": { 00:35:48.959 "name": "Nvme1", 00:35:48.959 "trtype": "tcp", 00:35:48.959 "traddr": "10.0.0.2", 00:35:48.959 "adrfam": "ipv4", 00:35:48.959 "trsvcid": "4420", 00:35:48.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:48.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:48.959 "hdgst": false, 00:35:48.959 "ddgst": false 00:35:48.959 }, 00:35:48.959 "method": "bdev_nvme_attach_controller" 00:35:48.959 }' 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:48.959 05:55:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:48.959 05:55:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.959 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:48.959 ... 00:35:48.959 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:48.959 ... 00:35:48.959 fio-3.35 00:35:48.959 Starting 4 threads 00:35:48.959 EAL: No free 2048 kB hugepages reported on node 1 00:35:54.220 00:35:54.220 filename0: (groupid=0, jobs=1): err= 0: pid=1796733: Thu Jul 25 05:55:47 2024 00:35:54.220 read: IOPS=1841, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5003msec) 00:35:54.220 slat (nsec): min=3974, max=73388, avg=16073.58, stdev=8852.43 00:35:54.220 clat (usec): min=1497, max=7731, avg=4293.00, stdev=622.17 00:35:54.220 lat (usec): min=1506, max=7745, avg=4309.08, stdev=621.47 00:35:54.220 clat percentiles (usec): 00:35:54.221 | 1.00th=[ 3130], 5.00th=[ 3589], 10.00th=[ 3752], 20.00th=[ 3916], 00:35:54.221 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:54.221 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5800], 00:35:54.221 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7373], 99.95th=[ 7504], 00:35:54.221 | 99.99th=[ 7701] 00:35:54.221 bw ( KiB/s): min=13952, max=15216, per=24.99%, avg=14726.40, stdev=497.93, samples=10 00:35:54.221 iops : min= 1744, max= 1902, avg=1840.80, stdev=62.24, samples=10 00:35:54.221 lat (msec) : 2=0.04%, 4=25.90%, 10=74.06% 00:35:54.221 cpu : usr=94.12%, sys=5.36%, ctx=20, majf=0, minf=115 00:35:54.221 IO depths : 1=0.1%, 2=5.3%, 4=67.8%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:35:54.221 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.221 issued rwts: total=9212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.221 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:54.221 filename0: (groupid=0, jobs=1): err= 0: pid=1796734: Thu Jul 25 05:55:47 2024 00:35:54.221 read: IOPS=1855, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5004msec) 00:35:54.221 slat (nsec): min=4009, max=67115, avg=14543.20, stdev=8288.70 00:35:54.221 clat (usec): min=886, max=7773, avg=4263.58, stdev=644.69 00:35:54.221 lat (usec): min=902, max=7785, avg=4278.12, stdev=644.67 00:35:54.221 clat percentiles (usec): 00:35:54.221 | 1.00th=[ 2835], 5.00th=[ 3425], 10.00th=[ 3621], 20.00th=[ 3884], 00:35:54.221 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:54.221 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5800], 00:35:54.221 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7504], 99.95th=[ 7701], 00:35:54.221 | 99.99th=[ 7767] 00:35:54.221 bw ( KiB/s): min=13979, max=15328, per=25.19%, avg=14845.90, stdev=406.41, samples=10 00:35:54.221 iops : min= 1747, max= 1916, avg=1855.70, stdev=50.89, samples=10 00:35:54.221 lat (usec) : 1000=0.03% 00:35:54.221 lat (msec) : 2=0.01%, 4=27.86%, 10=72.09% 00:35:54.221 cpu : usr=94.84%, sys=4.68%, ctx=9, majf=0, minf=75 00:35:54.221 IO depths : 1=0.1%, 2=8.4%, 4=64.0%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.221 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.221 issued rwts: total=9285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.221 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:54.221 filename1: (groupid=0, jobs=1): err= 0: pid=1796735: Thu Jul 25 05:55:47 2024 00:35:54.221 read: IOPS=1830, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5001msec) 00:35:54.221 slat (nsec): min=3862, max=66983, avg=15784.39, stdev=8742.48 
00:35:54.221 clat (usec): min=785, max=8820, avg=4321.21, stdev=634.93 00:35:54.221 lat (usec): min=798, max=8846, avg=4337.00, stdev=634.35 00:35:54.221 clat percentiles (usec): 00:35:54.221 | 1.00th=[ 3163], 5.00th=[ 3621], 10.00th=[ 3785], 20.00th=[ 3916], 00:35:54.221 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:54.221 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5800], 00:35:54.221 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 7832], 99.95th=[ 8586], 00:35:54.221 | 99.99th=[ 8848] 00:35:54.221 bw ( KiB/s): min=13952, max=15040, per=24.78%, avg=14608.00, stdev=363.71, samples=9 00:35:54.221 iops : min= 1744, max= 1880, avg=1826.00, stdev=45.46, samples=9 00:35:54.221 lat (usec) : 1000=0.03% 00:35:54.221 lat (msec) : 2=0.01%, 4=24.43%, 10=75.53% 00:35:54.221 cpu : usr=93.70%, sys=5.84%, ctx=9, majf=0, minf=71 00:35:54.221 IO depths : 1=0.1%, 2=6.5%, 4=66.3%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.221 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.221 issued rwts: total=9153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.221 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:54.221 filename1: (groupid=0, jobs=1): err= 0: pid=1796736: Thu Jul 25 05:55:47 2024 00:35:54.221 read: IOPS=1842, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5002msec) 00:35:54.221 slat (nsec): min=3925, max=70731, avg=17542.69, stdev=8938.12 00:35:54.221 clat (usec): min=801, max=7913, avg=4285.79, stdev=643.76 00:35:54.221 lat (usec): min=839, max=7928, avg=4303.33, stdev=643.08 00:35:54.221 clat percentiles (usec): 00:35:54.221 | 1.00th=[ 2999], 5.00th=[ 3556], 10.00th=[ 3752], 20.00th=[ 3884], 00:35:54.221 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:54.221 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5800], 00:35:54.221 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 
7504], 99.95th=[ 7767], 00:35:54.221 | 99.99th=[ 7898] 00:35:54.221 bw ( KiB/s): min=13851, max=15952, per=25.00%, avg=14732.30, stdev=638.17, samples=10 00:35:54.221 iops : min= 1731, max= 1994, avg=1841.50, stdev=79.83, samples=10 00:35:54.221 lat (usec) : 1000=0.03% 00:35:54.221 lat (msec) : 2=0.14%, 4=28.64%, 10=71.19% 00:35:54.221 cpu : usr=93.52%, sys=5.48%, ctx=162, majf=0, minf=77 00:35:54.221 IO depths : 1=0.1%, 2=6.3%, 4=66.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.221 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.221 issued rwts: total=9214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.221 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:54.221 00:35:54.221 Run status group 0 (all jobs): 00:35:54.221 READ: bw=57.6MiB/s (60.3MB/s), 14.3MiB/s-14.5MiB/s (15.0MB/s-15.2MB/s), io=288MiB (302MB), run=5001-5004msec 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:54.479 
05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.479 00:35:54.479 real 0m24.610s 00:35:54.479 user 4m29.238s 00:35:54.479 sys 0m8.299s 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:54.479 05:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.479 ************************************ 00:35:54.479 END TEST fio_dif_rand_params 00:35:54.479 ************************************ 00:35:54.479 05:55:48 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:54.479 05:55:48 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:54.479 05:55:48 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:54.479 05:55:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:54.737 ************************************ 00:35:54.737 START TEST fio_dif_digest 00:35:54.737 ************************************ 00:35:54.737 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:35:54.737 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:54.737 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:54.737 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:54.738 bdev_null0 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:54.738 [2024-07-25 05:55:48.220150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@532 -- # config=() 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:54.738 { 00:35:54.738 "params": { 00:35:54.738 "name": "Nvme$subsystem", 00:35:54.738 "trtype": "$TEST_TRANSPORT", 00:35:54.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.738 "adrfam": "ipv4", 00:35:54.738 "trsvcid": "$NVMF_PORT", 00:35:54.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.738 "hdgst": ${hdgst:-false}, 00:35:54.738 "ddgst": ${ddgst:-false} 00:35:54.738 }, 00:35:54.738 "method": "bdev_nvme_attach_controller" 00:35:54.738 } 00:35:54.738 EOF 00:35:54.738 )") 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # 
cat 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:54.738 "params": { 00:35:54.738 "name": "Nvme0", 00:35:54.738 "trtype": "tcp", 00:35:54.738 "traddr": "10.0.0.2", 00:35:54.738 "adrfam": "ipv4", 00:35:54.738 "trsvcid": "4420", 00:35:54.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.738 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.738 "hdgst": true, 00:35:54.738 "ddgst": true 00:35:54.738 }, 00:35:54.738 "method": "bdev_nvme_attach_controller" 00:35:54.738 }' 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:54.738 05:55:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.007 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:55.007 ... 
00:35:55.007 fio-3.35 00:35:55.007 Starting 3 threads 00:35:55.007 EAL: No free 2048 kB hugepages reported on node 1 00:36:07.233 00:36:07.233 filename0: (groupid=0, jobs=1): err= 0: pid=1797507: Thu Jul 25 05:55:59 2024 00:36:07.233 read: IOPS=207, BW=26.0MiB/s (27.2MB/s)(261MiB/10047msec) 00:36:07.233 slat (nsec): min=4583, max=30638, avg=14072.61, stdev=1647.45 00:36:07.233 clat (usec): min=8538, max=57223, avg=14411.71, stdev=3165.97 00:36:07.233 lat (usec): min=8552, max=57236, avg=14425.79, stdev=3165.97 00:36:07.233 clat percentiles (usec): 00:36:07.233 | 1.00th=[11338], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:36:07.233 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:36:07.233 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[16057], 00:36:07.233 | 99.00th=[17171], 99.50th=[49546], 99.90th=[56361], 99.95th=[56361], 00:36:07.233 | 99.99th=[57410] 00:36:07.233 bw ( KiB/s): min=24064, max=27703, per=33.12%, avg=26665.15, stdev=1095.99, samples=20 00:36:07.233 iops : min= 188, max= 216, avg=208.30, stdev= 8.54, samples=20 00:36:07.233 lat (msec) : 10=0.43%, 20=98.90%, 50=0.19%, 100=0.48% 00:36:07.233 cpu : usr=92.91%, sys=6.61%, ctx=15, majf=0, minf=115 00:36:07.233 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:07.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.233 issued rwts: total=2086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.233 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:07.233 filename0: (groupid=0, jobs=1): err= 0: pid=1797508: Thu Jul 25 05:55:59 2024 00:36:07.233 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(260MiB/10005msec) 00:36:07.233 slat (nsec): min=4581, max=95107, avg=14563.91, stdev=2636.38 00:36:07.233 clat (usec): min=8946, max=21966, avg=14436.74, stdev=1263.20 00:36:07.233 lat (usec): min=8961, max=21978, avg=14451.30, 
stdev=1263.14 00:36:07.233 clat percentiles (usec): 00:36:07.233 | 1.00th=[10159], 5.00th=[12518], 10.00th=[13042], 20.00th=[13566], 00:36:07.233 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:36:07.233 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:36:07.233 | 99.00th=[17171], 99.50th=[17695], 99.90th=[21890], 99.95th=[21890], 00:36:07.233 | 99.99th=[21890] 00:36:07.233 bw ( KiB/s): min=25856, max=27648, per=32.97%, avg=26547.20, stdev=470.58, samples=20 00:36:07.233 iops : min= 202, max= 216, avg=207.40, stdev= 3.68, samples=20 00:36:07.233 lat (msec) : 10=0.87%, 20=98.99%, 50=0.14% 00:36:07.233 cpu : usr=93.05%, sys=6.41%, ctx=22, majf=0, minf=168 00:36:07.233 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:07.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.233 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.233 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:07.233 filename0: (groupid=0, jobs=1): err= 0: pid=1797509: Thu Jul 25 05:55:59 2024 00:36:07.233 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(270MiB/10048msec) 00:36:07.233 slat (nsec): min=4540, max=27813, avg=15035.70, stdev=2235.98 00:36:07.233 clat (usec): min=8020, max=54972, avg=13909.60, stdev=2071.36 00:36:07.233 lat (usec): min=8034, max=54992, avg=13924.64, stdev=2071.40 00:36:07.233 clat percentiles (usec): 00:36:07.233 | 1.00th=[ 9896], 5.00th=[11994], 10.00th=[12518], 20.00th=[13042], 00:36:07.233 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:36:07.233 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15139], 95.00th=[15664], 00:36:07.233 | 99.00th=[16581], 99.50th=[16909], 99.90th=[54264], 99.95th=[54789], 00:36:07.233 | 99.99th=[54789] 00:36:07.233 bw ( KiB/s): min=25344, max=29184, per=34.26%, avg=27584.00, stdev=882.42, samples=20 
00:36:07.233 iops : min= 198, max= 228, avg=215.50, stdev= 6.89, samples=20 00:36:07.233 lat (msec) : 10=1.02%, 20=98.75%, 50=0.09%, 100=0.14% 00:36:07.233 cpu : usr=92.02%, sys=7.44%, ctx=35, majf=0, minf=75 00:36:07.233 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:07.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.233 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.233 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:07.233 00:36:07.233 Run status group 0 (all jobs): 00:36:07.233 READ: bw=78.6MiB/s (82.4MB/s), 25.9MiB/s-26.8MiB/s (27.2MB/s-28.1MB/s), io=790MiB (828MB), run=10005-10048msec 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.233 00:36:07.233 real 0m11.080s 00:36:07.233 user 0m29.051s 00:36:07.233 sys 0m2.319s 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:07.233 05:55:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.233 ************************************ 00:36:07.233 END TEST fio_dif_digest 00:36:07.233 ************************************ 00:36:07.233 05:55:59 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:07.233 05:55:59 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:07.233 rmmod nvme_tcp 00:36:07.233 rmmod nvme_fabrics 00:36:07.233 rmmod nvme_keyring 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1791446 ']' 00:36:07.233 05:55:59 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1791446 00:36:07.233 05:55:59 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1791446 ']' 00:36:07.233 05:55:59 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1791446 00:36:07.233 05:55:59 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:36:07.233 05:55:59 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:07.233 05:55:59 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1791446 00:36:07.233 05:55:59 nvmf_dif -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:07.234 05:55:59 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:07.234 05:55:59 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1791446' 00:36:07.234 killing process with pid 1791446 00:36:07.234 05:55:59 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1791446 00:36:07.234 05:55:59 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1791446 00:36:07.234 05:55:59 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:07.234 05:55:59 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:07.234 Waiting for block devices as requested 00:36:07.234 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:07.234 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:07.234 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:07.492 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:07.492 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:07.492 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:07.492 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:07.750 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:07.750 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:07.750 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:07.750 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:08.007 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:08.007 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:08.007 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:08.007 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:08.265 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:08.265 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:08.524 05:56:01 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:08.524 05:56:01 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:08.524 05:56:01 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:08.524 
05:56:01 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:08.524 05:56:01 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.524 05:56:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:08.524 05:56:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.422 05:56:04 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:10.422 00:36:10.422 real 1m6.628s 00:36:10.422 user 6m25.272s 00:36:10.422 sys 0m19.986s 00:36:10.422 05:56:04 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:10.422 05:56:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:10.422 ************************************ 00:36:10.422 END TEST nvmf_dif 00:36:10.422 ************************************ 00:36:10.422 05:56:04 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:10.422 05:56:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:10.422 05:56:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:10.422 05:56:04 -- common/autotest_common.sh@10 -- # set +x 00:36:10.422 ************************************ 00:36:10.422 START TEST nvmf_abort_qd_sizes 00:36:10.422 ************************************ 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:10.422 * Looking for test storage... 
00:36:10.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:10.422 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:10.681 05:56:04 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:10.681 05:56:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:12.581 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:12.581 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:36:12.581 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:12.581 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:12.581 05:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:12.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:12.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:36:12.581 00:36:12.581 --- 10.0.0.2 ping statistics --- 00:36:12.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.581 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:12.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:12.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:36:12.581 00:36:12.581 --- 10.0.0.1 ping statistics --- 00:36:12.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.581 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:12.581 05:56:06 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:13.954 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:13.954 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:13.954 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:13.954 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:13.954 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:13.954 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:13.954 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:13.954 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:13.954 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:13.954 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:13.954 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:13.954 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:13.954 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:13.954 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:13.954 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:36:13.954 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:14.889 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1802358 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1802358 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1802358 ']' 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:14.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:14.889 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.889 [2024-07-25 05:56:08.545419] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:36:14.889 [2024-07-25 05:56:08.545502] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.889 EAL: No free 2048 kB hugepages reported on node 1 00:36:15.147 [2024-07-25 05:56:08.611095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:15.147 [2024-07-25 05:56:08.702480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:15.147 [2024-07-25 05:56:08.702540] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:15.147 [2024-07-25 05:56:08.702556] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:15.147 [2024-07-25 05:56:08.702569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:15.147 [2024-07-25 05:56:08.702581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
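The namespace plumbing and target launch traced above (`nvmf_tcp_init` in nvmf/common.sh, then `nvmfappstart`) can be condensed into a standalone sketch. The interface names (cvl_0_0, cvl_0_1), addresses, and namespace name are taken from this log; this is an illustrative reconstruction that needs root and the same two connected ports, not a drop-in replacement for nvmf/common.sh.

```shell
#!/usr/bin/env bash
# Sketch of the TCP loopback topology built by nvmf_tcp_init above.
# Assumes root and two connected ports named cvl_0_0 / cvl_0_1 (from this log).
set -e

NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Move the target-side port into its own namespace so initiator and target
# traffic crosses the wire instead of short-circuiting through loopback.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in on the initiator interface, then verify
# reachability in both directions, exactly as the log does.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# The target app then runs inside the namespace, as traced above:
# ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
```

This is a privileged configuration procedure; it is not expected to run outside a machine with this NIC layout.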
00:36:15.147 [2024-07-25 05:56:08.702661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.147 [2024-07-25 05:56:08.702732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:15.147 [2024-07-25 05:56:08.702831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:15.147 [2024-07-25 05:56:08.702833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:15.147 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:15.405 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:15.405 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:15.405 05:56:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:15.405 05:56:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:15.405 05:56:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:15.405 05:56:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:15.405 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:15.405 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:15.405 05:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:15.405 ************************************ 00:36:15.405 START TEST spdk_target_abort 00:36:15.405 ************************************ 00:36:15.405 05:56:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:36:15.405 05:56:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:15.405 05:56:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:15.405 05:56:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.405 05:56:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.685 spdk_targetn1 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.685 [2024-07-25 05:56:11.716920] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.685 [2024-07-25 05:56:11.749140] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:18.685 05:56:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.685 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.248 Initializing NVMe Controllers 00:36:21.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:21.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:21.248 Initialization complete. Launching workers. 
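The `rpc_cmd` calls traced above configure the running target; against a live `nvmf_tgt` they reduce to the following scripts/rpc.py sequence. The PCI address, NQN, serial, and flags are copied from this log; the `RPC` variable and script layout are assumptions for illustration.

```shell
# Sketch of the target configuration performed by the rpc_cmd calls above,
# written against scripts/rpc.py from an SPDK tree. Requires nvmf_tgt to be
# running (inside the target namespace, as set up earlier in this log).
RPC="./scripts/rpc.py"

# Attach the local NVMe drive (0000:88:00.0 in this log) as bdev spdk_targetn1.
$RPC bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target

# TCP transport; flags (-o -u 8192) copied verbatim from the log above.
$RPC nvmf_create_transport -t tcp -o -u 8192

# Subsystem, namespace, and listener on the namespaced target IP.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
```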
00:36:21.248 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11771, failed: 0 00:36:21.248 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 10526 00:36:21.248 success 784, unsuccess 461, failed 0 00:36:21.248 05:56:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:21.248 05:56:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.249 EAL: No free 2048 kB hugepages reported on node 1 00:36:25.430 Initializing NVMe Controllers 00:36:25.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:25.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:25.430 Initialization complete. Launching workers. 
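The three spdk_target_abort runs above iterate one abort invocation over the queue depths declared in `qds=(4 24 64)`; the loop reduces to:

```shell
# Queue-depth sweep from abort_qd_sizes.sh, as traced above. The abort example
# issues mixed reads/writes (-w rw, 50% mix via -M 50) of 4096-byte I/Os and
# tries to abort them in flight. Paths are relative to an SPDK build tree.
qds=(4 24 64)
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in "${qds[@]}"; do
  ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
```

Each run prints the per-namespace I/O and abort counts seen in the log ("I/O completed … abort submitted … success/unsuccess/failed").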
00:36:25.430 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8536, failed: 0 00:36:25.430 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7289 00:36:25.430 success 281, unsuccess 966, failed 0 00:36:25.430 05:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:25.430 05:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:25.430 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.958 Initializing NVMe Controllers 00:36:27.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:27.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:27.958 Initialization complete. Launching workers. 
00:36:27.958 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31945, failed: 0 00:36:27.958 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2592, failed to submit 29353 00:36:27.958 success 517, unsuccess 2075, failed 0 00:36:27.958 05:56:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:27.958 05:56:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.958 05:56:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.958 05:56:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.958 05:56:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:27.958 05:56:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.958 05:56:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1802358 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1802358 ']' 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1802358 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1802358 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1802358' 00:36:29.330 killing process with pid 1802358 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1802358 00:36:29.330 05:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1802358 00:36:29.589 00:36:29.589 real 0m14.218s 00:36:29.589 user 0m53.741s 00:36:29.589 sys 0m2.721s 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.589 ************************************ 00:36:29.589 END TEST spdk_target_abort 00:36:29.589 ************************************ 00:36:29.589 05:56:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:29.589 05:56:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:29.589 05:56:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:29.589 05:56:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:29.589 ************************************ 00:36:29.589 START TEST kernel_target_abort 00:36:29.589 ************************************ 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:29.589 05:56:23 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:29.589 05:56:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:30.974 Waiting for block devices as requested 00:36:30.974 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:30.974 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:30.974 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:30.974 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:31.232 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:31.232 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:31.232 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:31.232 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:31.490 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:31.490 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:31.490 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:31.490 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:31.748 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:31.748 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:31.748 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:31.748 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:32.006 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:32.006 No valid GPT data, bailing 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:32.006 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:32.264 00:36:32.264 Discovery Log Number of Records 2, Generation counter 2 00:36:32.264 =====Discovery Log Entry 0====== 00:36:32.264 trtype: tcp 00:36:32.264 adrfam: ipv4 00:36:32.264 subtype: current discovery subsystem 00:36:32.264 treq: not specified, sq flow control disable supported 00:36:32.264 portid: 1 00:36:32.264 trsvcid: 4420 00:36:32.264 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:32.264 traddr: 10.0.0.1 00:36:32.264 eflags: none 00:36:32.264 sectype: none 00:36:32.264 =====Discovery Log Entry 1====== 00:36:32.264 trtype: tcp 00:36:32.264 adrfam: ipv4 00:36:32.264 subtype: nvme subsystem 00:36:32.264 treq: not specified, sq flow control disable supported 00:36:32.264 portid: 1 00:36:32.264 trsvcid: 4420 00:36:32.264 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:32.264 traddr: 10.0.0.1 00:36:32.264 eflags: none 00:36:32.264 sectype: none 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:32.264 05:56:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:32.264 EAL: No free 2048 kB hugepages reported on node 1 00:36:35.542 Initializing NVMe Controllers 00:36:35.542 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:35.542 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:35.542 Initialization complete. Launching workers. 
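The kernel-target setup traced earlier (`configure_kernel_target`) builds the same subsystem with the in-kernel nvmet driver via configfs. xtrace does not show the redirection targets of the `echo` calls, so the configfs file names below are reconstructed from the standard nvmet layout, not copied from the log.

```shell
# Sketch of configure_kernel_target: expose /dev/nvme0n1 over NVMe/TCP on
# 10.0.0.1:4420 using the kernel nvmet configfs interface. Requires root;
# configfs paths are the standard nvmet layout (reconstructed, see above).
set -e
NQN=nqn.2016-06.io.spdk:testnqn
SUB=/sys/kernel/config/nvmet/subsystems/$NQN
PORT=/sys/kernel/config/nvmet/ports/1

modprobe nvmet

mkdir -p "$SUB/namespaces/1" "$PORT"

echo "SPDK-$NQN"  > "$SUB/attr_serial"          # serial, as echoed in the log
echo 1            > "$SUB/attr_allow_any_host"
echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"
echo 1            > "$SUB/namespaces/1/enable"

echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp      > "$PORT/addr_trtype"
echo 4420     > "$PORT/addr_trsvcid"
echo ipv4     > "$PORT/addr_adrfam"

# Publishing = linking the subsystem under the port, as the log's ln -s does.
ln -s "$SUB" "$PORT/subsystems/$NQN"
```

`nvme discover -a 10.0.0.1 -t tcp -s 4420` should then return the two discovery-log entries shown above.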
00:36:35.542 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32810, failed: 0 00:36:35.542 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32810, failed to submit 0 00:36:35.542 success 0, unsuccess 32810, failed 0 00:36:35.542 05:56:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:35.542 05:56:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.542 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.820 Initializing NVMe Controllers 00:36:38.820 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:38.820 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:38.820 Initialization complete. Launching workers. 
00:36:38.820 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64691, failed: 0 00:36:38.820 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16318, failed to submit 48373 00:36:38.820 success 0, unsuccess 16318, failed 0 00:36:38.820 05:56:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:38.820 05:56:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:38.820 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.411 Initializing NVMe Controllers 00:36:41.411 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:41.411 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:41.411 Initialization complete. Launching workers. 
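The three abort runs above come from a loop over the configured queue depths. A sketch of that loop as traced (abort_qd_sizes.sh@32-34); `abort_tool` is a hypothetical stand-in for the real `build/examples/abort` binary so the loop shape is runnable on its own:

```shell
# Sketch of the queue-depth loop traced above. The real script invokes
# build/examples/abort against the assembled target string; a stub function
# stands in here so the loop is self-contained.
abort_tool() { echo "abort -q $1 -w rw -M 50 -o 4096 -r '$2'"; }
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do    # the qds=(4 24 64) list from the trace
    abort_tool "$qd" "$target"
done
```

At qd=4 every completed I/O had an abort submitted for it; at the deeper queue depths most aborts fail to submit because the per-controller abort queue is already full, which is what the "failed to submit" counters above reflect.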
00:36:41.411 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63522, failed: 0 00:36:41.411 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15866, failed to submit 47656 00:36:41.411 success 0, unsuccess 15866, failed 0 00:36:41.411 05:56:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:41.411 05:56:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:41.411 05:56:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:41.411 05:56:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:41.669 05:56:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:41.669 05:56:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:41.669 05:56:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:41.669 05:56:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:41.669 05:56:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:41.669 05:56:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:42.601 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:42.601 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:42.601 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:42.601 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:42.601 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:42.601 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:42.602 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:42.602 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:42.602 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:42.860 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:42.860 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:42.860 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:42.860 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:42.860 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:42.860 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:42.860 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:43.793 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:43.793 00:36:43.793 real 0m14.242s 00:36:43.793 user 0m5.249s 00:36:43.793 sys 0m3.339s 00:36:43.793 05:56:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:43.793 05:56:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.793 ************************************ 00:36:43.793 END TEST kernel_target_abort 00:36:43.793 ************************************ 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:43.793 rmmod nvme_tcp 00:36:43.793 rmmod nvme_fabrics 00:36:43.793 rmmod nvme_keyring 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1802358 ']' 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1802358 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1802358 ']' 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1802358 00:36:43.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1802358) - No such process 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1802358 is not found' 00:36:43.793 Process with pid 1802358 is not found 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:43.793 05:56:37 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:45.168 Waiting for block devices as requested 00:36:45.168 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:45.168 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:45.168 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:45.426 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:45.426 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:45.426 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:45.426 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:45.684 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:45.684 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:45.684 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:45.684 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:45.942 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:45.942 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:45.942 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:45.942 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:46.200 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:46.200 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:46.200 05:56:39 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:46.200 05:56:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:46.200 05:56:39 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:46.200 05:56:39 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:46.200 05:56:39 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.200 05:56:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:46.200 05:56:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.730 05:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:48.730 00:36:48.730 real 0m37.800s 00:36:48.730 user 1m1.027s 00:36:48.730 sys 0m9.454s 00:36:48.730 05:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:48.730 05:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.730 ************************************ 00:36:48.730 END TEST nvmf_abort_qd_sizes 00:36:48.730 ************************************ 00:36:48.730 05:56:41 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:48.730 05:56:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:48.730 05:56:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:48.730 05:56:41 -- common/autotest_common.sh@10 -- # set +x 00:36:48.730 ************************************ 00:36:48.730 START TEST keyring_file 00:36:48.730 ************************************ 00:36:48.730 05:56:41 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:48.730 * Looking for test storage... 00:36:48.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:48.730 05:56:41 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:48.730 05:56:41 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:48.730 05:56:41 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:48.730 05:56:41 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:48.730 05:56:41 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:48.730 05:56:41 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.730 05:56:41 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.730 05:56:41 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.730 05:56:41 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:48.730 05:56:41 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:48.730 05:56:41 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:48.730 05:56:41 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:48.730 05:56:41 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:48.730 05:56:41 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:48.730 05:56:41 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:48.730 05:56:41 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:48.730 05:56:41 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:48.730 05:56:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:48.730 05:56:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:48.730 05:56:41 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:48.730 05:56:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:48.730 05:56:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:48.730 05:56:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vhGbXuUwVA 00:36:48.730 05:56:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:48.730 05:56:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vhGbXuUwVA 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vhGbXuUwVA 00:36:48.730 05:56:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vhGbXuUwVA 00:36:48.730 05:56:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rSRyRK8x8p 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:48.730 05:56:42 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:48.730 05:56:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:48.730 05:56:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:48.730 05:56:42 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:48.730 05:56:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:48.730 05:56:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rSRyRK8x8p 00:36:48.730 05:56:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rSRyRK8x8p 00:36:48.731 05:56:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rSRyRK8x8p 00:36:48.731 05:56:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=1808156 00:36:48.731 05:56:42 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:48.731 05:56:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1808156 00:36:48.731 05:56:42 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1808156 ']' 00:36:48.731 05:56:42 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.731 05:56:42 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:48.731 05:56:42 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.731 05:56:42 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:48.731 05:56:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:48.731 [2024-07-25 05:56:42.099748] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:36:48.731 [2024-07-25 05:56:42.099844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808156 ] 00:36:48.731 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.731 [2024-07-25 05:56:42.160597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.731 [2024-07-25 05:56:42.246389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:48.989 05:56:42 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:48.989 [2024-07-25 05:56:42.495508] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.989 null0 00:36:48.989 [2024-07-25 05:56:42.527607] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:48.989 [2024-07-25 05:56:42.528114] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:48.989 [2024-07-25 05:56:42.535607] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.989 05:56:42 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:48.989 [2024-07-25 05:56:42.547634] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:36:48.989 request:
00:36:48.989 {
00:36:48.989   "nqn": "nqn.2016-06.io.spdk:cnode0",
00:36:48.989   "secure_channel": false,
00:36:48.989   "listen_address": {
00:36:48.989     "trtype": "tcp",
00:36:48.989     "traddr": "127.0.0.1",
00:36:48.989     "trsvcid": "4420"
00:36:48.989   },
00:36:48.989   "method": "nvmf_subsystem_add_listener",
00:36:48.989   "req_id": 1
00:36:48.989 }
00:36:48.989 Got JSON-RPC error response
00:36:48.989 response:
00:36:48.989 {
00:36:48.989   "code": -32602,
00:36:48.989   "message": "Invalid parameters"
00:36:48.989 }
00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:48.989 05:56:42 keyring_file -- keyring/file.sh@46 -- # bperfpid=1808174 00:36:48.989 05:56:42 keyring_file -- keyring/file.sh@45 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:48.989 05:56:42 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1808174 /var/tmp/bperf.sock 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1808174 ']' 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:48.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:48.989 05:56:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:48.989 [2024-07-25 05:56:42.594760] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:36:48.989 [2024-07-25 05:56:42.594824] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808174 ] 00:36:48.989 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.989 [2024-07-25 05:56:42.654413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.247 [2024-07-25 05:56:42.745622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:49.247 05:56:42 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:49.247 05:56:42 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:49.247 05:56:42 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vhGbXuUwVA 00:36:49.247 05:56:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vhGbXuUwVA 00:36:49.505 05:56:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rSRyRK8x8p 00:36:49.505 05:56:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rSRyRK8x8p 00:36:49.763 05:56:43 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:49.763 05:56:43 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:49.763 05:56:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.763 05:56:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.763 05:56:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.020 05:56:43 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.vhGbXuUwVA == 
\/\t\m\p\/\t\m\p\.\v\h\G\b\X\u\U\w\V\A ]] 00:36:50.020 05:56:43 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:50.020 05:56:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:50.021 05:56:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.021 05:56:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.021 05:56:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:50.278 05:56:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.rSRyRK8x8p == \/\t\m\p\/\t\m\p\.\r\S\R\y\R\K\8\x\8\p ]] 00:36:50.278 05:56:43 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:50.278 05:56:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:50.278 05:56:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.278 05:56:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.278 05:56:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.278 05:56:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.539 05:56:44 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:50.539 05:56:44 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:50.539 05:56:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:50.539 05:56:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.539 05:56:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.539 05:56:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.539 05:56:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:50.796 05:56:44 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:36:50.796 05:56:44 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.796 05:56:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.054 [2024-07-25 05:56:44.588913] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:51.054 nvme0n1 00:36:51.054 05:56:44 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:51.054 05:56:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:51.054 05:56:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:51.054 05:56:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.054 05:56:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:51.054 05:56:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.312 05:56:44 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:51.312 05:56:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:51.312 05:56:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:51.312 05:56:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:51.312 05:56:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.312 05:56:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.312 05:56:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:51.570 05:56:45 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:51.570 05:56:45 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:51.570 Running I/O for 1 seconds... 00:36:52.943
00:36:52.943 Latency(us)
00:36:52.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:52.943 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:36:52.943 nvme0n1 : 1.02 5109.66 19.96 0.00 0.00 24782.19 11068.30 36894.34
00:36:52.943 ===================================================================================================================
00:36:52.943 Total : 5109.66 19.96 0.00 0.00 24782.19 11068.30 36894.34
00:36:52.943 0
00:36:52.943 05:56:46 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:52.943 05:56:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:52.943 05:56:46 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:52.943 05:56:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.943 05:56:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.943 05:56:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.943 05:56:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.943 05:56:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.201
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.201 05:56:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.201 05:56:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:53.459 05:56:47 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:53.459 05:56:47 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:53.459 05:56:47 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:53.459 05:56:47 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:53.459 05:56:47 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:53.459 05:56:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:53.459 05:56:47 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:53.459 05:56:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:53.459 05:56:47 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:53.459 05:56:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:53.717 [2024-07-25 05:56:47.292050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:53.717 [2024-07-25 05:56:47.292405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022710 (107): Transport endpoint is not connected 00:36:53.717 [2024-07-25 05:56:47.293398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022710 (9): Bad file descriptor 00:36:53.717 [2024-07-25 05:56:47.294397] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:53.717 [2024-07-25 05:56:47.294419] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:53.717 [2024-07-25 05:56:47.294433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:53.717 request: 00:36:53.717 { 00:36:53.717 "name": "nvme0", 00:36:53.717 "trtype": "tcp", 00:36:53.717 "traddr": "127.0.0.1", 00:36:53.717 "adrfam": "ipv4", 00:36:53.717 "trsvcid": "4420", 00:36:53.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.717 "prchk_reftag": false, 00:36:53.717 "prchk_guard": false, 00:36:53.717 "hdgst": false, 00:36:53.717 "ddgst": false, 00:36:53.717 "psk": "key1", 00:36:53.717 "method": "bdev_nvme_attach_controller", 00:36:53.717 "req_id": 1 00:36:53.717 } 00:36:53.717 Got JSON-RPC error response 00:36:53.717 response: 00:36:53.717 { 00:36:53.717 "code": -5, 00:36:53.717 "message": "Input/output error" 00:36:53.717 } 00:36:53.717 05:56:47 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:53.717 05:56:47 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:53.717 05:56:47 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:53.717 05:56:47 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:53.717 05:56:47 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:53.717 
05:56:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.717 05:56:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.717 05:56:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.717 05:56:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.717 05:56:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.975 05:56:47 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:53.975 05:56:47 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:53.975 05:56:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:53.975 05:56:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.975 05:56:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.975 05:56:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.975 05:56:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:54.232 05:56:47 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:54.232 05:56:47 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:54.232 05:56:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:54.490 05:56:48 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:54.490 05:56:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:54.747 05:56:48 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:54.747 05:56:48 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.747 05:56:48 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:55.005 05:56:48 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:55.005 05:56:48 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.vhGbXuUwVA 00:36:55.005 05:56:48 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vhGbXuUwVA 00:36:55.005 05:56:48 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:55.005 05:56:48 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vhGbXuUwVA 00:36:55.005 05:56:48 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:55.005 05:56:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.005 05:56:48 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:55.005 05:56:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.005 05:56:48 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vhGbXuUwVA 00:36:55.005 05:56:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vhGbXuUwVA 00:36:55.263 [2024-07-25 05:56:48.775228] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vhGbXuUwVA': 0100660 00:36:55.263 [2024-07-25 05:56:48.775300] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:55.263 request: 00:36:55.263 { 00:36:55.263 "name": "key0", 00:36:55.263 "path": "/tmp/tmp.vhGbXuUwVA", 00:36:55.263 "method": "keyring_file_add_key", 00:36:55.263 "req_id": 1 00:36:55.263 } 00:36:55.263 Got JSON-RPC error response 00:36:55.263 response: 00:36:55.263 { 00:36:55.263 "code": -1, 
00:36:55.263 "message": "Operation not permitted" 00:36:55.263 } 00:36:55.263 05:56:48 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:55.263 05:56:48 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:55.263 05:56:48 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:55.263 05:56:48 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:55.263 05:56:48 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.vhGbXuUwVA 00:36:55.263 05:56:48 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vhGbXuUwVA 00:36:55.263 05:56:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vhGbXuUwVA 00:36:55.522 05:56:49 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.vhGbXuUwVA 00:36:55.522 05:56:49 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:55.522 05:56:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:55.522 05:56:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.522 05:56:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.522 05:56:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:55.522 05:56:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.817 05:56:49 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:55.817 05:56:49 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.817 05:56:49 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:55.817 05:56:49 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.817 05:56:49 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:55.817 05:56:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.817 05:56:49 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:55.817 05:56:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.817 05:56:49 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.817 05:56:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.075 [2024-07-25 05:56:49.561413] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vhGbXuUwVA': No such file or directory 00:36:56.075 [2024-07-25 05:56:49.561449] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:56.075 [2024-07-25 05:56:49.561477] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:56.075 [2024-07-25 05:56:49.561489] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:56.075 [2024-07-25 05:56:49.561500] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:56.075 request: 00:36:56.075 { 00:36:56.075 "name": "nvme0", 00:36:56.075 "trtype": "tcp", 00:36:56.075 "traddr": "127.0.0.1", 00:36:56.075 "adrfam": "ipv4", 00:36:56.075 "trsvcid": "4420", 00:36:56.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:56.075 
"hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:56.075 "prchk_reftag": false, 00:36:56.075 "prchk_guard": false, 00:36:56.075 "hdgst": false, 00:36:56.075 "ddgst": false, 00:36:56.075 "psk": "key0", 00:36:56.075 "method": "bdev_nvme_attach_controller", 00:36:56.075 "req_id": 1 00:36:56.075 } 00:36:56.075 Got JSON-RPC error response 00:36:56.075 response: 00:36:56.075 { 00:36:56.075 "code": -19, 00:36:56.075 "message": "No such device" 00:36:56.075 } 00:36:56.075 05:56:49 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:56.075 05:56:49 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:56.075 05:56:49 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:56.075 05:56:49 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:56.075 05:56:49 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:56.075 05:56:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:56.333 05:56:49 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:56.333 05:56:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:56.333 05:56:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:56.333 05:56:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:56.333 05:56:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:56.333 05:56:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:56.333 05:56:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RTuMlH5htf 00:36:56.333 05:56:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:56.334 05:56:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:56.334 05:56:49 keyring_file -- nvmf/common.sh@702 -- # local prefix 
key digest 00:36:56.334 05:56:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:56.334 05:56:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:56.334 05:56:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:56.334 05:56:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:56.334 05:56:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RTuMlH5htf 00:36:56.334 05:56:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RTuMlH5htf 00:36:56.334 05:56:49 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.RTuMlH5htf 00:36:56.334 05:56:49 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RTuMlH5htf 00:36:56.334 05:56:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RTuMlH5htf 00:36:56.591 05:56:50 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.591 05:56:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.849 nvme0n1 00:36:56.849 05:56:50 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:56.849 05:56:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:56.849 05:56:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.849 05:56:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.849 05:56:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.849 05:56:50 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:57.107 05:56:50 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:57.107 05:56:50 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:57.107 05:56:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:57.365 05:56:50 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:57.365 05:56:50 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:57.365 05:56:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:57.365 05:56:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.365 05:56:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:57.622 05:56:51 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:57.622 05:56:51 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:57.622 05:56:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:57.622 05:56:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:57.622 05:56:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:57.622 05:56:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.622 05:56:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:57.880 05:56:51 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:57.880 05:56:51 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:57.880 05:56:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:58.138 05:56:51 
keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:58.138 05:56:51 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:58.138 05:56:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.396 05:56:51 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:58.396 05:56:51 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RTuMlH5htf 00:36:58.396 05:56:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RTuMlH5htf 00:36:58.653 05:56:52 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rSRyRK8x8p 00:36:58.654 05:56:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rSRyRK8x8p 00:36:58.911 05:56:52 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.911 05:56:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:59.169 nvme0n1 00:36:59.169 05:56:52 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:59.169 05:56:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:59.427 05:56:53 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:59.427 "subsystems": [ 00:36:59.427 { 00:36:59.427 "subsystem": "keyring", 00:36:59.427 "config": [ 00:36:59.427 { 
00:36:59.427 "method": "keyring_file_add_key", 00:36:59.427 "params": { 00:36:59.427 "name": "key0", 00:36:59.427 "path": "/tmp/tmp.RTuMlH5htf" 00:36:59.427 } 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "method": "keyring_file_add_key", 00:36:59.427 "params": { 00:36:59.427 "name": "key1", 00:36:59.427 "path": "/tmp/tmp.rSRyRK8x8p" 00:36:59.427 } 00:36:59.427 } 00:36:59.427 ] 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "subsystem": "iobuf", 00:36:59.427 "config": [ 00:36:59.427 { 00:36:59.427 "method": "iobuf_set_options", 00:36:59.427 "params": { 00:36:59.427 "small_pool_count": 8192, 00:36:59.427 "large_pool_count": 1024, 00:36:59.427 "small_bufsize": 8192, 00:36:59.427 "large_bufsize": 135168 00:36:59.427 } 00:36:59.427 } 00:36:59.427 ] 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "subsystem": "sock", 00:36:59.427 "config": [ 00:36:59.427 { 00:36:59.427 "method": "sock_set_default_impl", 00:36:59.427 "params": { 00:36:59.427 "impl_name": "posix" 00:36:59.427 } 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "method": "sock_impl_set_options", 00:36:59.427 "params": { 00:36:59.427 "impl_name": "ssl", 00:36:59.427 "recv_buf_size": 4096, 00:36:59.427 "send_buf_size": 4096, 00:36:59.427 "enable_recv_pipe": true, 00:36:59.427 "enable_quickack": false, 00:36:59.427 "enable_placement_id": 0, 00:36:59.427 "enable_zerocopy_send_server": true, 00:36:59.427 "enable_zerocopy_send_client": false, 00:36:59.427 "zerocopy_threshold": 0, 00:36:59.427 "tls_version": 0, 00:36:59.427 "enable_ktls": false 00:36:59.427 } 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "method": "sock_impl_set_options", 00:36:59.427 "params": { 00:36:59.427 "impl_name": "posix", 00:36:59.427 "recv_buf_size": 2097152, 00:36:59.427 "send_buf_size": 2097152, 00:36:59.427 "enable_recv_pipe": true, 00:36:59.427 "enable_quickack": false, 00:36:59.427 "enable_placement_id": 0, 00:36:59.427 "enable_zerocopy_send_server": true, 00:36:59.427 "enable_zerocopy_send_client": false, 00:36:59.427 "zerocopy_threshold": 0, 
00:36:59.427 "tls_version": 0, 00:36:59.427 "enable_ktls": false 00:36:59.427 } 00:36:59.427 } 00:36:59.427 ] 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "subsystem": "vmd", 00:36:59.427 "config": [] 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "subsystem": "accel", 00:36:59.427 "config": [ 00:36:59.427 { 00:36:59.427 "method": "accel_set_options", 00:36:59.427 "params": { 00:36:59.427 "small_cache_size": 128, 00:36:59.427 "large_cache_size": 16, 00:36:59.427 "task_count": 2048, 00:36:59.427 "sequence_count": 2048, 00:36:59.427 "buf_count": 2048 00:36:59.427 } 00:36:59.427 } 00:36:59.427 ] 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "subsystem": "bdev", 00:36:59.427 "config": [ 00:36:59.427 { 00:36:59.427 "method": "bdev_set_options", 00:36:59.427 "params": { 00:36:59.427 "bdev_io_pool_size": 65535, 00:36:59.427 "bdev_io_cache_size": 256, 00:36:59.427 "bdev_auto_examine": true, 00:36:59.427 "iobuf_small_cache_size": 128, 00:36:59.427 "iobuf_large_cache_size": 16 00:36:59.427 } 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "method": "bdev_raid_set_options", 00:36:59.427 "params": { 00:36:59.427 "process_window_size_kb": 1024, 00:36:59.427 "process_max_bandwidth_mb_sec": 0 00:36:59.427 } 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "method": "bdev_iscsi_set_options", 00:36:59.427 "params": { 00:36:59.427 "timeout_sec": 30 00:36:59.427 } 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "method": "bdev_nvme_set_options", 00:36:59.427 "params": { 00:36:59.427 "action_on_timeout": "none", 00:36:59.427 "timeout_us": 0, 00:36:59.427 "timeout_admin_us": 0, 00:36:59.427 "keep_alive_timeout_ms": 10000, 00:36:59.427 "arbitration_burst": 0, 00:36:59.427 "low_priority_weight": 0, 00:36:59.427 "medium_priority_weight": 0, 00:36:59.427 "high_priority_weight": 0, 00:36:59.427 "nvme_adminq_poll_period_us": 10000, 00:36:59.427 "nvme_ioq_poll_period_us": 0, 00:36:59.427 "io_queue_requests": 512, 00:36:59.427 "delay_cmd_submit": true, 00:36:59.427 "transport_retry_count": 4, 00:36:59.427 
"bdev_retry_count": 3, 00:36:59.427 "transport_ack_timeout": 0, 00:36:59.427 "ctrlr_loss_timeout_sec": 0, 00:36:59.427 "reconnect_delay_sec": 0, 00:36:59.427 "fast_io_fail_timeout_sec": 0, 00:36:59.427 "disable_auto_failback": false, 00:36:59.427 "generate_uuids": false, 00:36:59.427 "transport_tos": 0, 00:36:59.427 "nvme_error_stat": false, 00:36:59.427 "rdma_srq_size": 0, 00:36:59.427 "io_path_stat": false, 00:36:59.427 "allow_accel_sequence": false, 00:36:59.427 "rdma_max_cq_size": 0, 00:36:59.427 "rdma_cm_event_timeout_ms": 0, 00:36:59.427 "dhchap_digests": [ 00:36:59.427 "sha256", 00:36:59.427 "sha384", 00:36:59.427 "sha512" 00:36:59.427 ], 00:36:59.427 "dhchap_dhgroups": [ 00:36:59.427 "null", 00:36:59.427 "ffdhe2048", 00:36:59.427 "ffdhe3072", 00:36:59.427 "ffdhe4096", 00:36:59.427 "ffdhe6144", 00:36:59.427 "ffdhe8192" 00:36:59.427 ] 00:36:59.427 } 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "method": "bdev_nvme_attach_controller", 00:36:59.427 "params": { 00:36:59.427 "name": "nvme0", 00:36:59.427 "trtype": "TCP", 00:36:59.427 "adrfam": "IPv4", 00:36:59.427 "traddr": "127.0.0.1", 00:36:59.427 "trsvcid": "4420", 00:36:59.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.427 "prchk_reftag": false, 00:36:59.427 "prchk_guard": false, 00:36:59.427 "ctrlr_loss_timeout_sec": 0, 00:36:59.427 "reconnect_delay_sec": 0, 00:36:59.427 "fast_io_fail_timeout_sec": 0, 00:36:59.427 "psk": "key0", 00:36:59.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.427 "hdgst": false, 00:36:59.427 "ddgst": false 00:36:59.427 } 00:36:59.427 }, 00:36:59.427 { 00:36:59.427 "method": "bdev_nvme_set_hotplug", 00:36:59.427 "params": { 00:36:59.427 "period_us": 100000, 00:36:59.427 "enable": false 00:36:59.428 } 00:36:59.428 }, 00:36:59.428 { 00:36:59.428 "method": "bdev_wait_for_examine" 00:36:59.428 } 00:36:59.428 ] 00:36:59.428 }, 00:36:59.428 { 00:36:59.428 "subsystem": "nbd", 00:36:59.428 "config": [] 00:36:59.428 } 00:36:59.428 ] 00:36:59.428 }' 00:36:59.428 05:56:53 keyring_file 
-- keyring/file.sh@114 -- # killprocess 1808174 00:36:59.428 05:56:53 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1808174 ']' 00:36:59.428 05:56:53 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1808174 00:36:59.428 05:56:53 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:59.428 05:56:53 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:59.428 05:56:53 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1808174 00:36:59.428 05:56:53 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:59.428 05:56:53 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:59.428 05:56:53 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1808174' 00:36:59.428 killing process with pid 1808174 00:36:59.428 05:56:53 keyring_file -- common/autotest_common.sh@969 -- # kill 1808174 00:36:59.428 Received shutdown signal, test time was about 1.000000 seconds 00:36:59.428 00:36:59.428 Latency(us) 00:36:59.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:59.428 =================================================================================================================== 00:36:59.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:59.428 05:56:53 keyring_file -- common/autotest_common.sh@974 -- # wait 1808174 00:36:59.686 05:56:53 keyring_file -- keyring/file.sh@117 -- # bperfpid=1809522 00:36:59.686 05:56:53 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1809522 /var/tmp/bperf.sock 00:36:59.686 05:56:53 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1809522 ']' 00:36:59.686 05:56:53 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:59.686 05:56:53 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r 
/var/tmp/bperf.sock -z -c /dev/fd/63 00:36:59.686 05:56:53 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:59.686 05:56:53 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:59.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:59.686 05:56:53 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:59.686 05:56:53 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:59.686 "subsystems": [ 00:36:59.686 { 00:36:59.686 "subsystem": "keyring", 00:36:59.686 "config": [ 00:36:59.686 { 00:36:59.686 "method": "keyring_file_add_key", 00:36:59.686 "params": { 00:36:59.686 "name": "key0", 00:36:59.686 "path": "/tmp/tmp.RTuMlH5htf" 00:36:59.686 } 00:36:59.686 }, 00:36:59.686 { 00:36:59.686 "method": "keyring_file_add_key", 00:36:59.686 "params": { 00:36:59.686 "name": "key1", 00:36:59.686 "path": "/tmp/tmp.rSRyRK8x8p" 00:36:59.686 } 00:36:59.686 } 00:36:59.686 ] 00:36:59.686 }, 00:36:59.686 { 00:36:59.686 "subsystem": "iobuf", 00:36:59.686 "config": [ 00:36:59.686 { 00:36:59.686 "method": "iobuf_set_options", 00:36:59.686 "params": { 00:36:59.686 "small_pool_count": 8192, 00:36:59.686 "large_pool_count": 1024, 00:36:59.686 "small_bufsize": 8192, 00:36:59.686 "large_bufsize": 135168 00:36:59.687 } 00:36:59.687 } 00:36:59.687 ] 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "subsystem": "sock", 00:36:59.687 "config": [ 00:36:59.687 { 00:36:59.687 "method": "sock_set_default_impl", 00:36:59.687 "params": { 00:36:59.687 "impl_name": "posix" 00:36:59.687 } 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "method": "sock_impl_set_options", 00:36:59.687 "params": { 00:36:59.687 "impl_name": "ssl", 00:36:59.687 "recv_buf_size": 4096, 00:36:59.687 "send_buf_size": 4096, 00:36:59.687 "enable_recv_pipe": true, 00:36:59.687 "enable_quickack": false, 00:36:59.687 "enable_placement_id": 0, 00:36:59.687 
"enable_zerocopy_send_server": true, 00:36:59.687 "enable_zerocopy_send_client": false, 00:36:59.687 "zerocopy_threshold": 0, 00:36:59.687 "tls_version": 0, 00:36:59.687 "enable_ktls": false 00:36:59.687 } 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "method": "sock_impl_set_options", 00:36:59.687 "params": { 00:36:59.687 "impl_name": "posix", 00:36:59.687 "recv_buf_size": 2097152, 00:36:59.687 "send_buf_size": 2097152, 00:36:59.687 "enable_recv_pipe": true, 00:36:59.687 "enable_quickack": false, 00:36:59.687 "enable_placement_id": 0, 00:36:59.687 "enable_zerocopy_send_server": true, 00:36:59.687 "enable_zerocopy_send_client": false, 00:36:59.687 "zerocopy_threshold": 0, 00:36:59.687 "tls_version": 0, 00:36:59.687 "enable_ktls": false 00:36:59.687 } 00:36:59.687 } 00:36:59.687 ] 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "subsystem": "vmd", 00:36:59.687 "config": [] 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "subsystem": "accel", 00:36:59.687 "config": [ 00:36:59.687 { 00:36:59.687 "method": "accel_set_options", 00:36:59.687 "params": { 00:36:59.687 "small_cache_size": 128, 00:36:59.687 "large_cache_size": 16, 00:36:59.687 "task_count": 2048, 00:36:59.687 "sequence_count": 2048, 00:36:59.687 "buf_count": 2048 00:36:59.687 } 00:36:59.687 } 00:36:59.687 ] 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "subsystem": "bdev", 00:36:59.687 "config": [ 00:36:59.687 { 00:36:59.687 "method": "bdev_set_options", 00:36:59.687 "params": { 00:36:59.687 "bdev_io_pool_size": 65535, 00:36:59.687 "bdev_io_cache_size": 256, 00:36:59.687 "bdev_auto_examine": true, 00:36:59.687 "iobuf_small_cache_size": 128, 00:36:59.687 "iobuf_large_cache_size": 16 00:36:59.687 } 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "method": "bdev_raid_set_options", 00:36:59.687 "params": { 00:36:59.687 "process_window_size_kb": 1024, 00:36:59.687 "process_max_bandwidth_mb_sec": 0 00:36:59.687 } 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "method": "bdev_iscsi_set_options", 00:36:59.687 "params": { 00:36:59.687 
"timeout_sec": 30 00:36:59.687 } 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "method": "bdev_nvme_set_options", 00:36:59.687 "params": { 00:36:59.687 "action_on_timeout": "none", 00:36:59.687 "timeout_us": 0, 00:36:59.687 "timeout_admin_us": 0, 00:36:59.687 "keep_alive_timeout_ms": 10000, 00:36:59.687 "arbitration_burst": 0, 00:36:59.687 "low_priority_weight": 0, 00:36:59.687 "medium_priority_weight": 0, 00:36:59.687 "high_priority_weight": 0, 00:36:59.687 "nvme_adminq_poll_period_us": 10000, 00:36:59.687 "nvme_ioq_poll_period_us": 0, 00:36:59.687 "io_queue_requests": 512, 00:36:59.687 "delay_cmd_submit": true, 00:36:59.687 "transport_retry_count": 4, 00:36:59.687 "bdev_retry_count": 3, 00:36:59.687 "transport_ack_timeout": 0, 00:36:59.687 "ctrlr_loss_timeout_sec": 0, 00:36:59.687 "reconnect_delay_sec": 0, 00:36:59.687 "fast_io_fail_timeout_sec": 0, 00:36:59.687 "disable_auto_failback": false, 00:36:59.687 "generate_uuids": false, 00:36:59.687 "transport_tos": 0, 00:36:59.687 "nvme_error_stat": false, 00:36:59.687 "rdma_srq_size": 0, 00:36:59.687 "io_path_stat": false, 00:36:59.687 "allow_accel_sequence": false, 00:36:59.687 "rdma_max_cq_size": 0, 00:36:59.687 "rdma_cm_event_timeout_ms": 0, 00:36:59.687 "dhchap_digests": [ 00:36:59.687 "sha256", 00:36:59.687 "sha384", 00:36:59.687 "sha512" 00:36:59.687 ], 00:36:59.687 "dhchap_dhgroups": [ 00:36:59.687 "null", 00:36:59.687 "ffdhe2048", 00:36:59.687 "ffdhe3072", 00:36:59.687 "ffdhe4096", 00:36:59.687 "ffdhe6144", 00:36:59.687 "ffdhe8192" 00:36:59.687 ] 00:36:59.687 } 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "method": "bdev_nvme_attach_controller", 00:36:59.687 "params": { 00:36:59.687 "name": "nvme0", 00:36:59.687 "trtype": "TCP", 00:36:59.687 "adrfam": "IPv4", 00:36:59.687 "traddr": "127.0.0.1", 00:36:59.687 "trsvcid": "4420", 00:36:59.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.687 "prchk_reftag": false, 00:36:59.687 "prchk_guard": false, 00:36:59.687 "ctrlr_loss_timeout_sec": 0, 00:36:59.687 
"reconnect_delay_sec": 0, 00:36:59.687 "fast_io_fail_timeout_sec": 0, 00:36:59.687 "psk": "key0", 00:36:59.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.687 "hdgst": false, 00:36:59.687 "ddgst": false 00:36:59.687 } 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "method": "bdev_nvme_set_hotplug", 00:36:59.687 "params": { 00:36:59.687 "period_us": 100000, 00:36:59.687 "enable": false 00:36:59.687 } 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "method": "bdev_wait_for_examine" 00:36:59.687 } 00:36:59.687 ] 00:36:59.687 }, 00:36:59.687 { 00:36:59.687 "subsystem": "nbd", 00:36:59.687 "config": [] 00:36:59.687 } 00:36:59.687 ] 00:36:59.687 }' 00:36:59.687 05:56:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:59.687 [2024-07-25 05:56:53.334941] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 00:36:59.687 [2024-07-25 05:56:53.335026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809522 ] 00:36:59.687 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.945 [2024-07-25 05:56:53.401236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.945 [2024-07-25 05:56:53.492535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.201 [2024-07-25 05:56:53.679657] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:00.766 05:56:54 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:00.766 05:56:54 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:00.766 05:56:54 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:00.766 05:56:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:37:00.766 05:56:54 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:01.024 05:56:54 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:01.024 05:56:54 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:01.024 05:56:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:01.024 05:56:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:01.024 05:56:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:01.024 05:56:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.024 05:56:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:01.281 05:56:54 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:01.281 05:56:54 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:01.281 05:56:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:01.281 05:56:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:01.281 05:56:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:01.281 05:56:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.281 05:56:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:01.538 05:56:55 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:01.538 05:56:55 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:01.538 05:56:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:01.538 05:56:55 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:01.796 05:56:55 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:01.796 05:56:55 keyring_file -- keyring/file.sh@1 -- # cleanup 
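The `get_refcnt` checks above shell out to `keyring_get_keys` over the bperf socket and filter the result with jq (`.[] | select(.name == "key0")' | .refcnt`). The same selection can be sketched in Python; the JSON below is a hand-written stand-in shaped like a `keyring_get_keys` result (using the temp-file paths seen later in this log), not output captured from this run:

```python
import json

# Hand-written stand-in for a keyring_get_keys JSON-RPC result.
sample = json.loads("""
[
  {"name": "key0", "path": "/tmp/tmp.RTuMlH5htf", "refcnt": 2},
  {"name": "key1", "path": "/tmp/tmp.rSRyRK8x8p", "refcnt": 1}
]
""")

def get_refcnt(keys, name):
    # Equivalent of: jq '.[] | select(.name == "<name>")' then jq -r .refcnt
    return next(k["refcnt"] for k in keys if k["name"] == name)

assert len(sample) == 2                    # file.sh@120: (( 2 == 2 ))
assert get_refcnt(sample, "key0") == 2     # file.sh@121: (( 2 == 2 ))
assert get_refcnt(sample, "key1") == 1     # file.sh@122: (( 1 == 1 ))
```

The three assertions mirror the `(( N == N ))` checks the script performs after `jq length` and the two `get_refcnt` calls.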
00:37:01.796 05:56:55 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.RTuMlH5htf /tmp/tmp.rSRyRK8x8p 00:37:01.796 05:56:55 keyring_file -- keyring/file.sh@20 -- # killprocess 1809522 00:37:01.796 05:56:55 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1809522 ']' 00:37:01.796 05:56:55 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1809522 00:37:01.796 05:56:55 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:01.796 05:56:55 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:01.796 05:56:55 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1809522 00:37:01.796 05:56:55 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:01.796 05:56:55 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:01.796 05:56:55 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1809522' 00:37:01.796 killing process with pid 1809522 00:37:01.796 05:56:55 keyring_file -- common/autotest_common.sh@969 -- # kill 1809522 00:37:01.796 Received shutdown signal, test time was about 1.000000 seconds 00:37:01.796 00:37:01.796 Latency(us) 00:37:01.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.796 =================================================================================================================== 00:37:01.796 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:01.796 05:56:55 keyring_file -- common/autotest_common.sh@974 -- # wait 1809522 00:37:02.052 05:56:55 keyring_file -- keyring/file.sh@21 -- # killprocess 1808156 00:37:02.053 05:56:55 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1808156 ']' 00:37:02.053 05:56:55 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1808156 00:37:02.053 05:56:55 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:02.053 05:56:55 keyring_file -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:37:02.053 05:56:55 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1808156 00:37:02.053 05:56:55 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:02.053 05:56:55 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:02.053 05:56:55 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1808156' 00:37:02.053 killing process with pid 1808156 00:37:02.053 05:56:55 keyring_file -- common/autotest_common.sh@969 -- # kill 1808156 00:37:02.053 [2024-07-25 05:56:55.562116] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:02.053 05:56:55 keyring_file -- common/autotest_common.sh@974 -- # wait 1808156 00:37:02.310 00:37:02.310 real 0m14.067s 00:37:02.310 user 0m34.954s 00:37:02.310 sys 0m3.232s 00:37:02.310 05:56:55 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:02.310 05:56:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:02.310 ************************************ 00:37:02.310 END TEST keyring_file 00:37:02.310 ************************************ 00:37:02.310 05:56:55 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:37:02.310 05:56:55 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:02.310 05:56:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:02.310 05:56:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:02.310 05:56:55 -- common/autotest_common.sh@10 -- # set +x 00:37:02.568 ************************************ 00:37:02.568 START TEST keyring_linux 00:37:02.568 ************************************ 00:37:02.568 05:56:56 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:02.568 * Looking for test storage... 
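The two `killprocess` calls above (for the bperf pid 1809522 and the target pid 1808156) follow autotest_common.sh's pattern: probe the pid, check the process name via `ps`, refuse to kill sudo, then kill and wait. A minimal Python analog of that liveness-probe-then-terminate pattern (simplified: it omits the sudo/name checks and the kill -9 fallback of the real shell helper):

```python
import os
import signal
import subprocess
import sys

def killprocess(pid: int) -> bool:
    """Illustrative analog of autotest_common.sh's killprocess helper:
    probe with signal 0 (kill -0), then send SIGTERM. The real shell
    version also inspects `ps -o comm=` and special-cases sudo."""
    if not pid:
        return False
    try:
        os.kill(pid, 0)            # kill -0 <pid>: liveness probe only
    except ProcessLookupError:
        return False
    print(f"killing process with pid {pid}")
    os.kill(pid, signal.SIGTERM)
    return True

# Demo: start a throwaway child, clean it up, confirm it is gone.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
assert killprocess(child.pid)
child.wait()
assert killprocess(child.pid) is False   # already reaped
```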
00:37:02.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:02.568 05:56:56 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:02.568 05:56:56 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:02.568 05:56:56 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:02.568 05:56:56 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:02.568 05:56:56 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:02.568 05:56:56 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:02.568 05:56:56 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:02.568 05:56:56 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:02.568 05:56:56 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:02.569 05:56:56 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:02.569 05:56:56 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:02.569 05:56:56 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.569 05:56:56 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.569 05:56:56 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.569 05:56:56 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.569 05:56:56 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.569 05:56:56 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:02.569 05:56:56 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:02.569 05:56:56 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:02.569 05:56:56 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:02.569 05:56:56 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:02.569 05:56:56 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:02.569 05:56:56 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:02.569 05:56:56 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:02.569 05:56:56 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:02.569 /tmp/:spdk-test:key0 00:37:02.569 05:56:56 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:02.569 05:56:56 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:02.569 05:56:56 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:02.569 /tmp/:spdk-test:key1 00:37:02.569 05:56:56 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1809987 00:37:02.569 05:56:56 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:02.569 05:56:56 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1809987 00:37:02.569 05:56:56 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1809987 ']' 00:37:02.569 05:56:56 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.569 05:56:56 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:02.569 05:56:56 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:02.569 05:56:56 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:02.569 05:56:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:02.569 [2024-07-25 05:56:56.230396] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
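The two `prep_key` calls above pipe each raw key through an inline `python -` helper (`format_interchange_psk` / `format_key`) to build the `NVMeTLSkey-1:00:...:` strings written to /tmp/:spdk-test:key0 and key1. A sketch of that formatting under the assumption that the interchange payload is base64(key bytes followed by a little-endian CRC32); the function name and CRC details here are an illustration of the format, not SPDK's actual helper code:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    """Illustrative sketch of the NVMe TLS PSK interchange format:
    NVMeTLSkey-1:<2-digit digest>:base64(key || CRC32(key)):
    The trailing CRC32 (assumed little-endian zlib CRC32 here) lets a
    consumer detect a corrupted or truncated key string."""
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    b64 = base64.b64encode(data + crc).decode("ascii").rstrip("=")
    return f"NVMeTLSkey-1:{digest:02}:{b64}:"

print(format_interchange_psk("00112233445566778899aabbccddeeff"))
print(format_interchange_psk("112233445566778899aabbccddeeff00"))
```

For the 32-character keys used in this test, the base64 payload begins with the encoding of the key text itself (e.g. `MDAxMTIy...` for `001122...`), matching the prefixes of the key strings visible in the log; only the trailing characters depend on the CRC assumption.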
00:37:02.569 [2024-07-25 05:56:56.230495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809987 ] 00:37:02.569 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.828 [2024-07-25 05:56:56.289662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.828 [2024-07-25 05:56:56.379536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:03.086 05:56:56 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:03.086 [2024-07-25 05:56:56.642920] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:03.086 null0 00:37:03.086 [2024-07-25 05:56:56.674975] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:03.086 [2024-07-25 05:56:56.675452] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.086 05:56:56 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:03.086 750924475 00:37:03.086 05:56:56 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:03.086 862778385 00:37:03.086 05:56:56 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1810016 00:37:03.086 05:56:56 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:03.086 05:56:56 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1810016 /var/tmp/bperf.sock 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1810016 ']' 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:03.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:03.086 05:56:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:03.086 [2024-07-25 05:56:56.740315] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 22.11.4 initialization... 
00:37:03.086 [2024-07-25 05:56:56.740399] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810016 ] 00:37:03.086 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.344 [2024-07-25 05:56:56.804352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.344 [2024-07-25 05:56:56.890919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:03.344 05:56:56 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:03.344 05:56:56 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:03.344 05:56:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:03.344 05:56:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:03.601 05:56:57 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:03.601 05:56:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:03.860 05:56:57 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:03.860 05:56:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:04.118 [2024-07-25 05:56:57.755415] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:04.376 
nvme0n1 00:37:04.376 05:56:57 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:04.376 05:56:57 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:04.376 05:56:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:04.376 05:56:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:04.376 05:56:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.376 05:56:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:04.634 05:56:58 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:04.634 05:56:58 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:04.634 05:56:58 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:04.634 05:56:58 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:04.634 05:56:58 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.634 05:56:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.634 05:56:58 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:04.892 05:56:58 keyring_linux -- keyring/linux.sh@25 -- # sn=750924475 00:37:04.892 05:56:58 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:04.892 05:56:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:04.892 05:56:58 keyring_linux -- keyring/linux.sh@26 -- # [[ 750924475 == \7\5\0\9\2\4\4\7\5 ]] 00:37:04.892 05:56:58 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 750924475 00:37:04.892 05:56:58 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:04.892 05:56:58 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:04.892 Running I/O for 1 seconds... 00:37:05.825 00:37:05.825 Latency(us) 00:37:05.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.825 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:05.825 nvme0n1 : 1.02 5078.99 19.84 0.00 0.00 24999.45 11650.84 38447.79 00:37:05.825 =================================================================================================================== 00:37:05.825 Total : 5078.99 19.84 0.00 0.00 24999.45 11650.84 38447.79 00:37:05.825 0 00:37:05.825 05:56:59 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:05.825 05:56:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:06.082 05:56:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:06.082 05:56:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:06.082 05:56:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:06.083 05:56:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:06.083 05:56:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:06.083 05:56:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.340 05:56:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:06.341 05:56:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:06.341 05:56:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:06.341 05:56:59 keyring_linux 
-- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:06.341 05:56:59 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:37:06.341 05:56:59 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:06.341 05:56:59 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:06.341 05:56:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:06.341 05:56:59 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:06.341 05:56:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:06.341 05:56:59 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:06.341 05:56:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:06.599 [2024-07-25 05:57:00.248368] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:06.599 [2024-07-25 05:57:00.249177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdd7f0 (107): Transport endpoint is not connected 00:37:06.599 [2024-07-25 05:57:00.250166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1cdd7f0 (9): Bad file descriptor 00:37:06.599 [2024-07-25 05:57:00.251164] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:06.599 [2024-07-25 05:57:00.251188] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:06.599 [2024-07-25 05:57:00.251213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:06.599 request: 00:37:06.599 { 00:37:06.599 "name": "nvme0", 00:37:06.599 "trtype": "tcp", 00:37:06.599 "traddr": "127.0.0.1", 00:37:06.599 "adrfam": "ipv4", 00:37:06.599 "trsvcid": "4420", 00:37:06.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.599 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:06.599 "prchk_reftag": false, 00:37:06.599 "prchk_guard": false, 00:37:06.599 "hdgst": false, 00:37:06.599 "ddgst": false, 00:37:06.599 "psk": ":spdk-test:key1", 00:37:06.599 "method": "bdev_nvme_attach_controller", 00:37:06.599 "req_id": 1 00:37:06.599 } 00:37:06.599 Got JSON-RPC error response 00:37:06.599 response: 00:37:06.599 { 00:37:06.599 "code": -5, 00:37:06.599 "message": "Input/output error" 00:37:06.599 } 00:37:06.600 05:57:00 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:37:06.600 05:57:00 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:06.600 05:57:00 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:06.600 05:57:00 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0
00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@33 -- # sn=750924475
00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 750924475
00:37:06.600 1 links removed
00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@33 -- # sn=862778385
00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 862778385
00:37:06.600 1 links removed
00:37:06.600 05:57:00 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1810016
00:37:06.600 05:57:00 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1810016 ']'
00:37:06.600 05:57:00 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1810016
00:37:06.600 05:57:00 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:37:06.600 05:57:00 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:06.600 05:57:00 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1810016
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1810016'
00:37:06.858 killing process with pid 1810016
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@969 -- # kill 1810016
00:37:06.858 Received shutdown signal, test time was about 1.000000 seconds
00:37:06.858
00:37:06.858 Latency(us)
00:37:06.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:06.858 ===================================================================================================================
00:37:06.858 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@974 -- # wait 1810016
00:37:06.858 05:57:00 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1809987
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1809987 ']'
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1809987
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1809987
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1809987'
00:37:06.858 killing process with pid 1809987
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@969 -- # kill 1809987
00:37:06.858 05:57:00 keyring_linux -- common/autotest_common.sh@974 -- # wait 1809987
00:37:07.423
00:37:07.423 real	0m4.933s
00:37:07.423 user	0m9.239s
00:37:07.423 sys	0m1.540s
00:37:07.423 05:57:00 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable
00:37:07.423 05:57:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:37:07.423 ************************************
00:37:07.423 END TEST keyring_linux
00:37:07.423 ************************************
00:37:07.423 05:57:00 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1
']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:37:07.423 05:57:00 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:37:07.423 05:57:00 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:37:07.423 05:57:00 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:37:07.423 05:57:00 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:37:07.423 05:57:00 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:37:07.423 05:57:00 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:37:07.424 05:57:00 -- common/autotest_common.sh@724 -- # xtrace_disable
00:37:07.424 05:57:00 -- common/autotest_common.sh@10 -- # set +x
00:37:07.424 05:57:00 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:37:07.424 05:57:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:37:07.424 05:57:00 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:37:07.424 05:57:00 -- common/autotest_common.sh@10 -- # set +x
00:37:09.347 INFO: APP EXITING
00:37:09.347 INFO: killing all VMs
00:37:09.347 INFO: killing vhost app
00:37:09.347 INFO: EXIT DONE
00:37:10.280 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:37:10.280 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:37:10.280 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:37:10.280 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:37:10.280 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:37:10.280 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:37:10.280 0000:00:04.2 (8086 0e22): Already
using the ioatdma driver 00:37:10.280 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:10.280 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:10.280 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:10.280 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:10.280 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:10.280 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:10.280 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:10.280 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:10.280 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:10.537 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:11.909 Cleaning 00:37:11.909 Removing: /var/run/dpdk/spdk0/config 00:37:11.909 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:11.909 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:11.909 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:11.909 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:11.909 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:11.909 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:11.910 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:11.910 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:11.910 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:11.910 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:11.910 Removing: /var/run/dpdk/spdk1/config 00:37:11.910 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:11.910 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:11.910 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:11.910 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:11.910 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:11.910 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:11.910 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:11.910 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:11.910 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:11.910 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:11.910 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:11.910 Removing: /var/run/dpdk/spdk2/config 00:37:11.910 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:11.910 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:11.910 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:11.910 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:11.910 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:11.910 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:11.910 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:11.910 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:11.910 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:11.910 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:11.910 Removing: /var/run/dpdk/spdk3/config 00:37:11.910 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:11.910 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:11.910 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:11.910 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:11.910 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:11.910 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:11.910 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:11.910 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:11.910 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:11.910 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:11.910 Removing: /var/run/dpdk/spdk4/config 00:37:11.910 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:11.910 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:11.910 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:11.910 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:11.910 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:11.910 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:11.910 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:11.910 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:11.910 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:11.910 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:11.910 Removing: /dev/shm/bdev_svc_trace.1 00:37:11.910 Removing: /dev/shm/nvmf_trace.0 00:37:11.910 Removing: /dev/shm/spdk_tgt_trace.pid1494471 00:37:11.910 Removing: /var/run/dpdk/spdk0 00:37:11.910 Removing: /var/run/dpdk/spdk1 00:37:11.910 Removing: /var/run/dpdk/spdk2 00:37:11.910 Removing: /var/run/dpdk/spdk3 00:37:11.910 Removing: /var/run/dpdk/spdk4 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1492873 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1493622 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1494471 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1494872 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1495565 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1495705 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1496417 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1496429 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1496673 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1497986 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1498907 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1499119 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1499402 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1499602 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1499798 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1499953 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1500111 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1500295 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1500732 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1503085 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1503253 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1503420 00:37:11.910 Removing: 
/var/run/dpdk/spdk_pid1503435 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1503966 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1503970 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1504396 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1504410 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1505011 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1505104 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1505379 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1505406 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1505871 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1506026 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1506225 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1508288 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1510801 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1517773 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1518180 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1520687 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1520853 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1523477 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1527064 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1529239 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1535524 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1540844 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1542658 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1543329 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1553440 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1555706 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1609232 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1612514 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1616337 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1620160 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1620170 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1620712 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1621359 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1622012 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1622414 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1622418 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1622557 
00:37:11.910 Removing: /var/run/dpdk/spdk_pid1622694 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1622697 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1623354 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1624002 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1624545 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1624946 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1625064 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1625200 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1626083 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1626803 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1632234 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1657921 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1660602 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1661766 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1663081 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1663209 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1663231 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1663372 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1663799 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1664998 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1665715 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1666033 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1667637 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1668057 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1668503 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1671016 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1674273 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1677801 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1701450 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1704091 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1707855 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1708802 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1709882 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1712455 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1714802 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1719517 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1719519 00:37:11.910 Removing: 
/var/run/dpdk/spdk_pid1722291 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1722425 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1722679 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1722950 00:37:11.910 Removing: /var/run/dpdk/spdk_pid1722961 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1724035 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1725325 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1726501 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1727684 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1728863 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1730037 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1733786 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1734171 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1735450 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1736184 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1739785 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1741764 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1745239 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1749102 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1755312 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1759663 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1759666 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1771864 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1772267 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1772673 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1773202 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1773662 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1774185 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1774586 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1775004 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1777491 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1777637 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1782036 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1782084 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1783771 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1788721 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1788727 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1791616 
00:37:12.168 Removing: /var/run/dpdk/spdk_pid1792895 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1794294 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1795153 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1796551 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1797443 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1802707 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1803098 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1803498 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1805047 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1805350 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1805722 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1808156 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1808174 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1809522 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1809987 00:37:12.168 Removing: /var/run/dpdk/spdk_pid1810016 00:37:12.168 Clean 00:37:12.168 05:57:05 -- common/autotest_common.sh@1451 -- # return 0 00:37:12.168 05:57:05 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:37:12.168 05:57:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:12.168 05:57:05 -- common/autotest_common.sh@10 -- # set +x 00:37:12.168 05:57:05 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:37:12.168 05:57:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:12.168 05:57:05 -- common/autotest_common.sh@10 -- # set +x 00:37:12.168 05:57:05 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:12.168 05:57:05 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:12.168 05:57:05 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:12.168 05:57:05 -- spdk/autotest.sh@395 -- # hash lcov 00:37:12.168 05:57:05 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:12.168 05:57:05 -- spdk/autotest.sh@397 -- # hostname 00:37:12.168 05:57:05 -- 
spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:12.425 geninfo: WARNING: invalid characters removed from testname! 00:37:44.489 05:57:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:44.489 05:57:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:47.017 05:57:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:50.298 05:57:43 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:52.823 05:57:46 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:56.139 05:57:49 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:58.664 05:57:52 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:58.664 05:57:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:58.664 05:57:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:58.664 05:57:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:58.664 05:57:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:58.664 05:57:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.664 05:57:52 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.664 05:57:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.664 05:57:52 -- paths/export.sh@5 -- $ export PATH 00:37:58.664 05:57:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.664 05:57:52 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:58.664 05:57:52 -- common/autobuild_common.sh@447 -- $ date +%s 00:37:58.664 05:57:52 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721879872.XXXXXX 00:37:58.664 05:57:52 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721879872.tjsyWy 00:37:58.664 05:57:52 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:37:58.664 05:57:52 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:37:58.664 05:57:52 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:58.664 05:57:52 -- 
common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:58.664 05:57:52 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:58.664 05:57:52 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:58.664 05:57:52 -- common/autobuild_common.sh@463 -- $ get_config_params 00:37:58.664 05:57:52 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:37:58.664 05:57:52 -- common/autotest_common.sh@10 -- $ set +x 00:37:58.664 05:57:52 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:58.664 05:57:52 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:37:58.664 05:57:52 -- pm/common@17 -- $ local monitor 00:37:58.664 05:57:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:58.664 05:57:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:58.664 05:57:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:58.664 05:57:52 -- pm/common@21 -- $ date +%s 00:37:58.664 05:57:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:58.664 05:57:52 -- pm/common@21 -- $ date +%s 00:37:58.664 05:57:52 -- pm/common@25 -- $ sleep 1 00:37:58.664 05:57:52 -- pm/common@21 -- $ date +%s 00:37:58.664 05:57:52 -- pm/common@21 -- $ date +%s 00:37:58.664 05:57:52 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721879872 00:37:58.664 05:57:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721879872 00:37:58.664 05:57:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721879872 00:37:58.664 05:57:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721879872 00:37:58.664 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721879872_collect-vmstat.pm.log 00:37:58.664 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721879872_collect-cpu-load.pm.log 00:37:58.664 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721879872_collect-cpu-temp.pm.log 00:37:58.664 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721879872_collect-bmc-pm.bmc.pm.log 00:37:59.599 05:57:53 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:37:59.599 05:57:53 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:59.599 05:57:53 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:59.599 05:57:53 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:59.599 05:57:53 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:59.599 05:57:53 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:59.599 05:57:53 
-- spdk/autopackage.sh@19 -- $ timing_finish 00:37:59.599 05:57:53 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:59.599 05:57:53 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:59.599 05:57:53 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:59.855 05:57:53 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:59.855 05:57:53 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:59.855 05:57:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:59.855 05:57:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:59.855 05:57:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:59.855 05:57:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:59.855 05:57:53 -- pm/common@44 -- $ pid=1821766 00:37:59.855 05:57:53 -- pm/common@50 -- $ kill -TERM 1821766 00:37:59.855 05:57:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:59.855 05:57:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:59.855 05:57:53 -- pm/common@44 -- $ pid=1821768 00:37:59.855 05:57:53 -- pm/common@50 -- $ kill -TERM 1821768 00:37:59.855 05:57:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:59.855 05:57:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:59.855 05:57:53 -- pm/common@44 -- $ pid=1821770 00:37:59.855 05:57:53 -- pm/common@50 -- $ kill -TERM 1821770 00:37:59.855 05:57:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:59.855 05:57:53 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:59.855 05:57:53 -- pm/common@44 -- $ pid=1821801
00:37:59.855 05:57:53 -- pm/common@50 -- $ sudo -E kill -TERM 1821801
00:37:59.855 + [[ -n 1388746 ]]
00:37:59.855 + sudo kill 1388746
00:37:59.862 [Pipeline] }
00:37:59.880 [Pipeline] // stage
00:37:59.887 [Pipeline] }
00:37:59.905 [Pipeline] // timeout
00:37:59.910 [Pipeline] }
00:37:59.928 [Pipeline] // catchError
00:37:59.932 [Pipeline] }
00:37:59.944 [Pipeline] // wrap
00:37:59.950 [Pipeline] }
00:37:59.964 [Pipeline] // catchError
00:37:59.972 [Pipeline] stage
00:37:59.974 [Pipeline] { (Epilogue)
00:37:59.987 [Pipeline] catchError
00:37:59.989 [Pipeline] {
00:38:00.003 [Pipeline] echo
00:38:00.004 Cleanup processes
00:38:00.009 [Pipeline] sh
00:38:00.283 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:00.283 1821931 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:38:00.283 1822031 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:00.297 [Pipeline] sh
00:38:00.578 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:00.578 ++ grep -v 'sudo pgrep'
00:38:00.578 ++ awk '{print $1}'
00:38:00.578 + sudo kill -9 1821931
00:38:00.591 [Pipeline] sh
00:38:00.871 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:10.844 [Pipeline] sh
00:38:11.148 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:11.148 Artifacts sizes are good
00:38:11.167 [Pipeline] archiveArtifacts
00:38:11.174 Archiving artifacts
00:38:11.381 [Pipeline] sh
00:38:11.662 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:11.678 [Pipeline] cleanWs
00:38:11.688 [WS-CLEANUP] Deleting project workspace...
00:38:11.688 [WS-CLEANUP] Deferred wipeout is used...
00:38:11.695 [WS-CLEANUP] done
00:38:11.697 [Pipeline] }
00:38:11.718 [Pipeline] // catchError
00:38:11.732 [Pipeline] sh
00:38:12.022 + logger -p user.info -t JENKINS-CI
00:38:12.033 [Pipeline] }
00:38:12.051 [Pipeline] // stage
00:38:12.056 [Pipeline] }
00:38:12.074 [Pipeline] // node
00:38:12.080 [Pipeline] End of Pipeline
00:38:12.119 Finished: SUCCESS